I'm working on an embedded application for the STM32F401RBT6 and I'm trying to establish a connection with a PC (Windows 10), but the device is not recognized by the system. The code, generated by STM32CubeMX and debugged with Atollic, does not work. I have looked at and tried to reproduce several examples, but nothing works. The code includes all the libraries I think are necessary.
I have the files generated by STM32CubeMX for the CDC communication, but I'm a newbie and I don't know what I have to modify in the code for the USB device to be recognized by the system. Can someone help me?
Besides the point from Soup (the failing malloc caused by a heap that is only 0x200 bytes by default), some Windows versions have a problem with the line coding in the example.
In usbd_cdc_if.c you should add:
/* USER CODE BEGIN PRIVATE_VARIABLES */
USBD_CDC_LineCodingTypeDef LineCoding =
{
  115200, /* baud rate */
  0x00,   /* stop bits: 1 */
  0x00,   /* parity: none */
  0x08    /* number of data bits: 8 */
};
And a little further down, in the command handler:
static int8_t CDC_Control_FS(uint8_t cmd, uint8_t* pbuf, uint16_t length)
...
  case CDC_SET_LINE_CODING:
    LineCoding.bitrate = (uint32_t)(pbuf[0] | (pbuf[1] << 8) |
                                    (pbuf[2] << 16) | (pbuf[3] << 24));
    LineCoding.format = pbuf[4];
    LineCoding.paritytype = pbuf[5];
    LineCoding.datatype = pbuf[6];
    break;

  case CDC_GET_LINE_CODING:
    pbuf[0] = (uint8_t)(LineCoding.bitrate);
    pbuf[1] = (uint8_t)(LineCoding.bitrate >> 8);
    pbuf[2] = (uint8_t)(LineCoding.bitrate >> 16);
    pbuf[3] = (uint8_t)(LineCoding.bitrate >> 24);
    pbuf[4] = LineCoding.format;
    pbuf[5] = LineCoding.paritytype;
    pbuf[6] = LineCoding.datatype;
    break;
This way the host won't get undefined data when it tries to set or get the line coding.
Inside the USBD_CDC_Init() function there is a malloc call which allocates about 540 bytes on the heap.
This was not taken into account by CubeMX when the code was generated.
So, at the very least, you must increase the heap size to account for these extra bytes to get the USB CDC port working.
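For example, in the GCC/TrueSTUDIO linker script that CubeMX typically generates (the symbol name and default value may differ in your toolchain, so treat this as a sketch):

_Min_Heap_Size = 0x400; /* raised from the default 0x200 so the ~540-byte USBD_CDC_Init() allocation fits */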
I'm trying to write an operating system using the OSDev wiki and other resources. Now I'm stuck making a keyboard interrupt handler. When I compile my OS and run the kernel using qemu-system-i386 -kernel kernel/myos.kernel, everything works perfectly. When I put everything into an ISO image and try to run it using qemu-system-i386 -cdrom myos.iso, it restarts when I press a key. I think it's caused by some problem in my interrupt handler or a bad IDT entry.
My keyboard handler (AT&T syntax):
.globl keyboard_handler
.align 4
keyboard_handler:
    pushal
    cld
    call keyboard_handler_main
    popal
    iret
My main handler in C:
void keyboard_handler_main(void) {
    unsigned char status;
    char keycode;

    /* write EOI */
    write_port(0x20, 0x20);

    status = read_port(KEYBOARD_STATUS_PORT);
    /* Lowest bit of status will be set if buffer is not empty */
    if (status & 0x01) {
        keycode = read_port(KEYBOARD_DATA_PORT);
        if (keycode < 0)
            return;
        if (keycode == ENTER_KEY_CODE) {
            printf("\n");
            return;
        }
        printf("%c", keyboard_map[(unsigned char) keycode]);
    }
}
The C function I use to load the IDT:
void idt_init(void)
{
    unsigned long keyboard_address;
    unsigned long idt_address;
    unsigned long idt_ptr[2];

    keyboard_address = (unsigned long) keyboard_handler;
    IDT[0x21].offset_lowerbits = keyboard_address & 0xffff;
    IDT[0x21].selector = KERNEL_CODE_SEGMENT_OFFSET;
    IDT[0x21].zero = 0;
    IDT[0x21].type_attr = INTERRUPT_GATE;
    IDT[0x21].offset_higherbits = (keyboard_address & 0xffff0000) >> 16;

    /* Ports
     *          PIC1    PIC2
     * Command  0x20    0xA0
     * Data     0x21    0xA1
     */
    write_port(0x20, 0x11);
    write_port(0xA0, 0x11);
    write_port(0x21, 0x20);
    write_port(0xA1, 0x28);
    write_port(0x21, 0x00);
    write_port(0xA1, 0x00);
    write_port(0x21, 0x01);
    write_port(0xA1, 0x01);
    write_port(0x21, 0xff);
    write_port(0xA1, 0xff);

    idt_address = (unsigned long) IDT;
    idt_ptr[0] = (sizeof(struct IDT_entry) * IDT_SIZE) + ((idt_address & 0xffff) << 16);
    idt_ptr[1] = idt_address >> 16;
    load_idt(idt_ptr);
    printf("%s\n", "loaded");
}
Files are organized the same way as OSDev's Meaty Skeleton. I do have a different bootloader.
Based on experience, I believed this issue was related to a GDT not being set up. Often when someone says interrupts work with QEMU's -kernel option but not with a real version of GRUB, it is related to the kernel developer not creating and loading their own GDT. The Multiboot Specification says:
‘GDTR’
Even though the segment registers are set up as described above, the ‘GDTR’ may be invalid, so the OS image must not load any segment registers (even just reloading the same values!) until it sets up its own ‘GDT’.
When using QEMU with the -kernel option the GDTR is usually valid but it isn't guaranteed to be that way. When using a real version of GRUB (installed to a hard drive, virtual image, ISO etc) to boot you may discover that the GDTR isn't in fact valid. The first time you attempt to reload any segment register with a selector (even if it is the same value) it may fault. When using interrupts the code segment (CS) will be modified which may cause a triple fault and reboot.
As well the Multiboot Specification doesn't say which selectors point to code or data descriptors. Since the layout of the GDT entries is not known or guaranteed by the Multiboot specification it poses a problem for filling in the IDT entries. Each IDT entry needs to specify a specific selector that points to a code segment.
The Meaty Skeleton tutorial on OSDev doesn't set up interrupts, nor does it modify any of the segment registers, so that code likely works with both QEMU's -kernel option and a real version of GRUB. Adding IDT and interrupt code on top of the base tutorial probably led to the failure you are seeing when booting with GRUB. That tutorial should probably make it clearer that you should set up your own GDT and not rely on the one set up by the Multiboot loader.
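For reference, a minimal flat GDT set up by the kernel itself might look like the sketch below (illustrative code, not from the question; gdt_set_gate and load_gdt are names you would define yourself, the latter being a short assembly stub that executes lgdt, reloads the data segment registers, and far-jumps to reload CS):

struct gdt_entry {
    unsigned short limit_low;
    unsigned short base_low;
    unsigned char  base_middle;
    unsigned char  access;
    unsigned char  granularity;
    unsigned char  base_high;
} __attribute__((packed));

struct gdt_ptr {
    unsigned short limit;
    unsigned long  base;
} __attribute__((packed));

static struct gdt_entry gdt[3];
static struct gdt_ptr   gp;

static void gdt_set_gate(int n, unsigned long base, unsigned long limit,
                         unsigned char access, unsigned char gran)
{
    gdt[n].base_low    = base & 0xffff;
    gdt[n].base_middle = (base >> 16) & 0xff;
    gdt[n].base_high   = (base >> 24) & 0xff;
    gdt[n].limit_low   = limit & 0xffff;
    gdt[n].granularity = ((limit >> 16) & 0x0f) | (gran & 0xf0);
    gdt[n].access      = access;
}

void gdt_init(void)
{
    gp.limit = sizeof(gdt) - 1;
    gp.base  = (unsigned long) &gdt;
    gdt_set_gate(0, 0, 0, 0, 0);             /* mandatory null descriptor  */
    gdt_set_gate(1, 0, 0xfffff, 0x9a, 0xcf); /* selector 0x08: ring-0 code */
    gdt_set_gate(2, 0, 0xfffff, 0x92, 0xcf); /* selector 0x10: ring-0 data */
    load_gdt(&gp);  /* assembly stub: lgdt, reload segments, far jump */
}

With a layout like this, the KERNEL_CODE_SEGMENT_OFFSET used in the IDT entries would be 0x08.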
I am trying to test the I²C communication of the MAX77651 chip before programming it.
Here is my setup:
I have a UMFT4222EV connected to my Linux laptop by USB. Its SCL and SDA are linked to the SDA and SCL of my MAX77651 through the Evaluation Kit for the MAX77651. My MAX77651 EV kit is powered with 3.7 V on the VBATT pin.
I also installed the mraa library from GitHub and libft4222. I know mraa is installed correctly because I tried it with an example.
I was told that the mraa library handles the setup of the FT4222, so I only used mraa functions in my program.
I searched the Maxim Integrated website for the I²C slave address and one register where I could read data and check that everything is working. I then read the I²C protocol for reading a single register, which is described on page 78 of https://datasheets.maximintegrated.com/en/ds/MAX77650-MAX77651.pdf.
With all this information I tried to write my test program. I fixed the compile errors, but when I execute the program I can't read the register contents, which should be 0xFF.
Here is my program:
#include "stdio.h"
#include "syslog.h"
#include "string.h"
#include "unistd.h"
#include "mraa/i2c.h"
#include "mraa.h"
#define I2C_ADDR 0x48
int
main(int argc, char *argv[])
{
uint8_t *message;
*message=0XAC;
int i,j,k;
char reg_a_lire = 0x06;
mraa_init();
mraa_i2c_context i2c;
i2c = mraa_i2c_init(0);
mraa_i2c_frequency(i2c,MRAA_I2C_FAST);
mraa_i2c_address(i2c, I2C_ADDR);
mraa_i2c_write_byte(i2c,0x90);
mraa_i2c_read(i2c, message,1);
mraa_i2c_write_byte(i2c,reg_a_lire);
mraa_init();
mraa_i2c_write_byte(i2c,0x91);
mraa_i2c_read(i2c, message,1);
mraa_i2c_read(i2c, message,1);
printf("%02X \n", *message);
mraa_i2c_stop(i2c);
return 0;
}
Here is the actual output:
alex#cyclonit-laptop ~/Test_alex/tests $ ./a.out
AC
And I would like to get FF instead of AC.
I think my error could come from something I missed when initializing the FT4222, or from the MAX77651, which I maybe didn't power up correctly (perhaps 3.7 V on VBATT is not sufficient). But maybe this is a problem with my program, because I don't know much about uint8_t and I may have made a mistake.
I am hoping someone has experience with the FT4222 and/or MAX77651 and can help me.
I think you are confused by pg. 75 of the MAX77651 datasheet.
ADDRESS | 7-BIT SLAVE ADDRESS | 8-BIT WRITE ADDRESS | 8-BIT READ ADDRESS
Main Address | 0x48, 0b 100 1000 | 0x90, 0b 1001 0000 | 0x91, 0b 1001 0001
(ADDR = 1)* | | |
-------------+---------------------+---------------------+-------------------
Main Address | 0x40, 0b 100 0000 | 0x80, 0b 1000 0000 | 0x81, 0b 1000 0001
(ADDR = 0)* | | |
Depending on your view of the I²C specification, you either use the 7-bit address or both 8-bit addresses. The reason is that the first I²C transaction sends the following 8 bits:
76543210
\_____/\-- R/#W
\------ 7-bit Slave Address
In your case that's 0x48 shifted one bit to the left followed by either a 1 for read mode or a 0 for write mode. Notice that
(0x40 << 1) | 0x00 == 0x80 == 8-bit Write Address (ADDR = 0)
(0x40 << 1) | 0x01 == 0x81 == 8-bit Read Address (ADDR = 0)
(0x48 << 1) | 0x00 == 0x90 == 8-bit Write Address (ADDR = 1)
(0x48 << 1) | 0x01 == 0x91 == 8-bit Read Address (ADDR = 1)
Some people like to use the slave address while other people prefer to give separate addresses so it's clear what the first byte of communication will look like.
So you should remove the extraneous mraa_i2c_write_byte(i2c, ...); lines from your code, as their intention seems to be to send the read or write address to the slave.
Check out the MRAA library API reference. There are functions that abstract reading and writing registers, either byte-wide [8-bit] registers or word-wide [16-bit] registers:
mraa_i2c_read_byte_data
mraa_i2c_read_bytes_data
mraa_i2c_write_byte_data
In both cases you should check the return codes to know whether the action succeeded. In your case, your message is likely not altered because there was a stop/error condition beforehand: the slave either did not ACK its slave address, or it did not ACK the 0x90/0x91 data bytes you mistakenly sent, as they don't show up in the Programmer's Guide as valid register addresses.
Another issue is that you try to read register INTM_GLBL (0x06; Global Interrupt Mask Register) using two consecutive read operations. I'm not sure if your intention is to read register 0x06 twice or if you intended to read INT_M_CHG (0x07; Global Interrupt Mask for Charger), but that's not what two single reads will do. Notice the description on pg. 78 of the datasheet:
Note that when the MAX77650/MAX77651 receive a stop
they do not modify their register pointer.
This is typical behavior for I²C slaves that support multi-byte/sequential reads. You will have to issue a sequential read operation for multiple bytes of data if you want to read multiple registers, e.g. using mraa_i2c_read_bytes_data.
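For instance, a sequential two-register read could look like this (a short sketch; i2c here is an initialized mraa_i2c_context with the slave address already set):

uint8_t buf[2];
/* read INTM_GLBL (0x06) and INT_M_CHG (0x07) in one sequential read */
int n = mraa_i2c_read_bytes_data(i2c, 0x06u, buf, 2);
if (n == 2)
    printf("INTM_GLBL = %02X, INT_M_CHG = %02X\n", buf[0], buf[1]);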
Something like the following might get you on the right track. It is supposed to read the CID and CLKS settings and wait forever if a read error occurred. If successful, it will disable charging and all outputs, set up the red LED to blink once every other second and, as soon as you uncomment it, transition the On/Off Controller into its On Via Software state to enable bias and the LED.
#define MAX77651_SLA 0x48u
#define CNFG_GLBL_REG 0x10u
#define CID_REG 0x11u
#define CNFG_CHG_B_REG 0x19u
#define CNFG_SBB0_B_REG 0x2Au
#define CNFG_SBB1_B_REG 0x2Cu
#define CNFG_SBB2_B_REG 0x2Eu
#define CNFG_LDO_B_REG 0x39u
#define CNFG_LED1_A_REG 0x41u
#define CNFG_LED1_B_REG 0x44u
#define CNFG_LED_TOP_REG 0x46u
int main(int argc, char *argv[]) {
    uint8_t data;
    int res;

    mraa_init();
    mraa_i2c_context i2c0;
    i2c0 = mraa_i2c_init(0);
    mraa_i2c_frequency(i2c0, MRAA_I2C_STD);
    mraa_i2c_address(i2c0, MAX77651_SLA);

    res = mraa_i2c_read_byte_data(i2c0, CID_REG);
    if (res < 0) {
        printf("Reading CID_REG failed.\n");
        mraa_i2c_stop(i2c0);
        while (1);
    }
    data = res;
    printf("CID_REG: CLKS = %02X CID = %02X\n", ((data & 0x70u) >> 4), (data & 0x0Fu));

    /* you should check return values here */
    mraa_i2c_write_byte_data(i2c0, 0x00u /* CHG_EN = off */, CNFG_CHG_B_REG);
    mraa_i2c_write_byte_data(i2c0, 0x04u /* EN_LDO = off */, CNFG_LDO_B_REG);
    mraa_i2c_write_byte_data(i2c0, 0x04u /* EN_SBB0 = off */, CNFG_SBB0_B_REG);
    mraa_i2c_write_byte_data(i2c0, 0x04u /* EN_SBB1 = off */, CNFG_SBB1_B_REG);
    mraa_i2c_write_byte_data(i2c0, 0x04u /* EN_SBB2 = off */, CNFG_SBB2_B_REG);

    /* set up red led to toggle every second */
    mraa_i2c_write_byte_data(i2c0, 0x17u /* P = 1s, D = 50% */, CNFG_LED1_B_REG);
    mraa_i2c_write_byte_data(i2c0, 0x98u /* FS = 6.4mA, BRT = 5.0mA */, CNFG_LED1_A_REG);
    mraa_i2c_write_byte_data(i2c0, 0x01u /* EN_LED_MSTR = on */, CNFG_LED_TOP_REG);

    // DANGER ZONE: enable only when you know what this is supposed to do
    //mraa_i2c_write_byte_data(i2c0, 0x10u /* SBIA_EN = on, SBIA_LPM = normal */, CNFG_GLBL_REG);

    mraa_i2c_stop(i2c0);
    while (1);
}
I managed to read the I²C registers of the MAX77651.
First, on the hardware side, I had to make sure that VIO had the right voltage, as #FRob said in his comment.
Then, for the software, I stopped using the mraa library because I couldn't control everything. I used the FT4222 library instead, which allowed me to open and initialize the FT4222 device. I took parts of the I²C example in this library for the initialization of the device. Then I noticed that I first had to use the FT4222's write function to send the register I wanted to read, and then simply use the read function to get my result. Those are the two steps required.
I won't post the whole program I made, as it is mainly taken from the I²C example for initialization, but here is the part I added to read register 0x06, which is supposed to contain 0xFF:
uint8 resultat = 0x11;
uint8 *p_resultat = &resultat;
int chiffre;
uint16 slaveAddr = 0x48;            /* MAX77651 7-bit slave address (see table above) */
uint16 bytesToWrite2 = 1;
uint16 bytesWritten2 = 1;
uint16 bytesRead;
uint8 valeur = 0x06;                /* REGISTER TO READ */
uint8 *p_valeur = &valeur;

FT4222_I2CMaster_Write(ftHandle, slaveAddr,
                       p_valeur, bytesToWrite2, &bytesWritten2); /* INDICATES WHICH REGISTER TO READ */
chiffre = FT4222_I2CMaster_Read(ftHandle,
                                slaveAddr, p_resultat, 1, &bytesRead); /* READ REGISTER */
printf("The content of the register %02X is : %02X \n",
       valeur, resultat);           /* DISPLAY RESULT */
printf("reading success if : %d = 0 \n", chiffre); /* READING SUCCESS? */
With this code and the initialization, I get the following result:
alex#cyclonit-laptop ~/Downloads/libft4222-1.2.1.4/examples $ ./a.out
Device 0 is interface A of mode-0 FT4222H:
0x0403601c FT4222 A
Chip version: 42220200, LibFT4222 version: 01020104
The content of the register 06 is : FF
reading success if : 0 = 0
Skipping interface B of mode-0 FT4222H.
If someone has the same problem, you are free to ask me questions by replying to this post. I am very thankful to the people here who helped me!
I am working on a library for controlling the M95128-W EEPROM from an STM32 device. I have the library writing and reading back data; however, the first byte of each page is not as expected and seems to be fixed at 0x04.
For example, I write 128 bytes across two pages starting at address 0x00 with the value 0x80. When read back I get:
byte[0] = 0x04;
byte[1] = 0x80;
byte[2] = 0x80;
byte[3] = 0x80;
.......
byte[64] = 0x04;
byte[65] = 0x80;
byte[66] = 0x80;
byte[67] = 0x80;
I have debugged the SPI with a logic analyzer and confirmed the correct bytes are being sent. When using the logic analyzer on the read command, the mysterious 0x04 is transmitted by the EEPROM.
Here is my code:
void FLA::write(const void* data, unsigned int dataLength, uint16_t address)
{
    int pagePos = 0;
    int pageCount = (dataLength + 64 - 1) / 64;
    int bytePos = 0;
    int startAddress = address;

    while (pagePos < pageCount)
    {
        HAL_GPIO_WritePin(GPIOB, GPIO_PIN_2, GPIO_PIN_SET); // WP high
        chipSelect();
        _spi->transfer(INSTRUCTION_WREN);
        chipUnselect();

        uint8_t status = readRegister(INSTRUCTION_RDSR);

        chipSelect();
        _spi->transfer(INSTRUCTION_WRITE);
        uint8_t xlow = address & 0xff;
        uint8_t xhigh = (address >> 8);
        _spi->transfer(xhigh); // part 1 address MSB
        _spi->transfer(xlow);  // part 2 address LSB

        for (unsigned int i = 0; i < 64 && bytePos < dataLength; i++)
        {
            uint8_t byte = ((uint8_t*)data)[bytePos];
            _spi->transfer(byte);
            printConsole("Wrote byte to ");
            printConsoleInt(startAddress + bytePos);
            printConsole("with value ");
            printConsoleInt(byte);
            printConsole("\n");
            bytePos++;
        }
        _spi->transfer(INSTRUCTION_WRDI);
        chipUnselect();
        HAL_GPIO_WritePin(GPIOB, GPIO_PIN_2, GPIO_PIN_RESET); // WP low

        bool writeComplete = false;
        while (writeComplete == false)
        {
            uint8_t status = readRegister(INSTRUCTION_RDSR);
            if (status & 1 << 0)
            {
                printConsole("Waiting for write to complete....\n");
            }
            else
            {
                writeComplete = true;
                printConsole("Write complete to page ");
                printConsoleInt(pagePos);
                printConsole("# address ");
                printConsoleInt(bytePos);
                printConsole("\n");
            }
        }
        pagePos++;
        address = address + 64;
    }
    printConsole("Finished writing all pages total bytes ");
    printConsoleInt(bytePos);
    printConsole("\n");
}

void FLA::read(char* returndata, unsigned int dataLength, uint16_t address)
{
    chipSelect();
    _spi->transfer(INSTRUCTION_READ);
    uint8_t xlow = address & 0xff;
    uint8_t xhigh = (address >> 8);
    _spi->transfer(xhigh); // part 1 address
    _spi->transfer(xlow);  // part 2 address
    for (unsigned int i = 0; i < dataLength; i++)
        returndata[i] = _spi->transfer(0x00);
    chipUnselect();
}
void FLA::read(char* returndata, unsigned int dataLength, uint16_t address)
{
chipSelect();
_spi->transfer(INSTRUCTION_READ);
uint8_t xlow = address & 0xff;
uint8_t xhigh = (address >> 8);
_spi->transfer(xhigh); // part 1 address
_spi->transfer(xlow); // part 2 address
for (unsigned int i = 0; i < dataLength; i++)
returndata[i] = _spi->transfer(0x00);
chipUnselect();
}
Any suggestion or help appreciated.
UPDATES:
I have tried writing 255 sequentially incrementing bytes to check for rollover. The results are as follows:
byte[0] = 4; // Incorrect Mystery Byte
byte[1] = 1;
byte[2] = 2;
byte[3] = 3;
.......
byte[63] = 63;
byte[64] = 4; // Incorrect Mystery Byte
byte[65] = 65;
byte[66] = 66;
.......
byte[127] = 127;
byte[128] = 4; // Incorrect Mystery Byte
byte[129] = 129;
The pattern continues. I have also tried writing just 8 bytes from address 0x00 and the same problem persists, so I think we can rule out rollover.
I have tried removing the debug printConsole calls, and it had no effect.
Here is an SPI logic trace of the write command:
And a close-up of the first byte that is not working correctly:
Code can be viewed on gitlab here:
https://gitlab.com/DanielBeyzade/stm32f107vc-home-control-master/blob/master/Src/flash.cpp
Init code of SPI can be seen here in MX_SPI_Init()
https://gitlab.com/DanielBeyzade/stm32f107vc-home-control-master/blob/master/Src/main.cpp
I have another device on the SPI bus (RFM69HW RF Module) which works as expected sending and receiving data.
The explanation was actually already given by Craig Estey in his answer. You do have a rollover. You write a full page and then, without cycling the CS pin, you send the INSTRUCTION_WRDI command. Guess what the binary code of this command is? If you guessed 4, you're absolutely right.
Check your code here:
chipSelect();
_spi->transfer(INSTRUCTION_WRITE);
uint8_t xlow = address & 0xff;
uint8_t xhigh = (address >> 8);
_spi->transfer(xhigh); // part 1 address MSB
_spi->transfer(xlow);  // part 2 address LSB
for (unsigned int i = 0; i < 64 && bytePos < dataLength; i++)
{
    uint8_t byte = ((uint8_t*)data)[bytePos];
    _spi->transfer(byte);
    // ...
    bytePos++;
}
_spi->transfer(INSTRUCTION_WRDI); // <-------------- ROLLOVER!
chipUnselect();
With these devices, each command MUST start with cycling CS. After CS goes low, the first byte is interpreted as a command. All remaining bytes, until CS is cycled again, are interpreted as data. So you cannot send multiple commands in a single "block" with CS constantly pulled low.
Another thing is that you don't need the WRDI command at all: after the write instruction is terminated (by CS going high), the WEL bit is automatically reset. See page 18 of the datasheet:
The Write Enable Latch (WEL) bit, in fact, becomes reset by any of the
following events:
• Power-up
• WRDI instruction execution
• WRSR instruction completion
• WRITE instruction completion.
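Putting both points together, the write sequence would look something like this (a sketch based on the code from the question, with WRDI dropped and CS cycled once per command):

chipSelect();
_spi->transfer(INSTRUCTION_WREN);   // command 1: set WEL
chipUnselect();                     // CS must go high before the next command

chipSelect();
_spi->transfer(INSTRUCTION_WRITE);  // command 2: page write
_spi->transfer(xhigh);
_spi->transfer(xlow);
for (unsigned int i = 0; i < 64 && bytePos < dataLength; i++)
    _spi->transfer(((uint8_t*)data)[bytePos++]);
chipUnselect();                     // raising CS starts the write cycle; WEL auto-clears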
Caveat: I don't have a definitive solution, just some observations and suggestions [that would be too large for a comment].
From 6.6: Each time a new data byte is shifted in, the least significant bits of the internal address counter are incremented. If more bytes are sent than will fit up to the end of the page, a condition known as “roll-over” occurs. In case of roll-over, the bytes exceeding the page size are overwritten from location 0 of the same page.
So, in your write loop code, you do: for (i = 0; i < 64; i++). This is incorrect in the general case if the LSB of address (xlow) is non-zero. You'd need to do something like: for (i = xlow % 64; i < 64; i++)
In other words, you might be getting the page boundary rollover. But, you mentioned that you're using address 0x0000, so it should work, even with the code as it exists.
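A sketch of that clamping, reusing the names from the question's code:

/* bytes left in the current 64-byte page; a burst clamped to this
   length can never roll over within one WRITE command */
unsigned int pageRemaining = 64 - (address % 64);
for (unsigned int i = 0; i < pageRemaining && bytePos < dataLength; i++)
    _spi->transfer(((uint8_t*)data)[bytePos++]);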
I might remove the print statements from the loop as they could have an effect on the serialization timing.
I might try this with an incrementing data pattern (e.g. 0x01, 0x02, 0x03, ...). That way, you could see which byte is rolling over [if any].
Also, try writing a single page from address zero, and write less than the full page size (i.e. less than 64 bytes) to guarantee that you're not getting rollover.
Also, from figure 13 [the timing diagram for WRITE], it looks like once you assert chip select, the ROM wants a continuous bit stream clocked precisely, so you may have a race condition where you're not providing the data at precisely the clock edge(s) needed. You may want to use the logic analyzer to verify that the data appears exactly in sync with the clock edge as required (i.e. at the clock rising edge).
As you've probably already noticed, offset 0 and offset 64 are getting the 0x04. So, this adds to the notion of rollover.
Or, it could be that the first data byte of each page is being written "late" and the 0x04 is a result of that.
I don't know if your output port has a SILO so you can send data as with a traditional serial I/O port, or whether you have to maintain precise bit-for-bit timing (which I presume _spi->transfer would do).
Another thing to try is to write a shorter pattern (e.g. 10 bytes) starting at a non-zero address (e.g. xhigh = 0, xlow = 4) with the incrementing pattern and see how things change.
UPDATE:
From your update, it appears to be the first byte of each page [obviously].
From the exploded view of the timing, I notice SCLK is not strictly uniform. The pulse width is slightly erratic. Since the write data is sampled on the clock rising edge, this shouldn't matter. But I wonder where this comes from. That is, is SCLK asserted/deasserted by software (i.e. by transfer), with SCLK connected to another GPIO pin? I'd be interested in seeing the source for the transfer function [or a disassembly].
I've just looked up SPI here: https://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus and it answers my own question.
From that, here is a sample transfer function:
/*
 * Simultaneously transmit and receive a byte on the SPI.
 *
 * Polarity and phase are assumed to be both 0, i.e.:
 *  - input data is captured on rising edge of SCLK.
 *  - output data is propagated on falling edge of SCLK.
 *
 * Returns the received byte.
 */
uint8_t SPI_transfer_byte(uint8_t byte_out)
{
    uint8_t byte_in = 0;
    uint8_t bit;

    for (bit = 0x80; bit; bit >>= 1) {
        /* Shift-out a bit to the MOSI line */
        write_MOSI((byte_out & bit) ? HIGH : LOW);
        /* Delay for at least the peer's setup time */
        delay(SPI_SCLK_LOW_TIME);
        /* Pull the clock line high */
        write_SCLK(HIGH);
        /* Shift-in a bit from the MISO line */
        if (read_MISO() == HIGH)
            byte_in |= bit;
        /* Delay for at least the peer's hold time */
        delay(SPI_SCLK_HIGH_TIME);
        /* Pull the clock line low */
        write_SCLK(LOW);
    }
    return byte_in;
}
So, the delay times need to be at least the ones the ROM needs. Hopefully, you can verify that is the case.
But, I also notice that on the problem byte, the first bit of the data appears to lag its rising clock edge. That is, I would want the data line to be stabilized before clock rising edge.
But, that assumes CPOL=0,CPHA=1. Your ROM can be programmed for that mode or CPOL=0,CPHA=0, which is the mode used by the sample code above.
This is what I see from the timing diagram. It implies that the transfer function does CPOL=0,CPHA=0:
[ASCII timing diagram: an SCLK pulse with the data line changing on the falling edge and stable across the rising edge, i.e. CPOL=0, CPHA=0]
This is what I originally expected (CPOL=0,CPHA=1) based on something earlier in the ROM document:
[ASCII timing diagram: the same SCLK pulse with the data line shifted half a clock, changing around the rising edge, i.e. CPOL=0, CPHA=1]
The ROM can be configured to use either CPOL=0,CPHA=0 or CPOL=1,CPHA=1. So, you may need to configure these values to match the transfer function (or vice-versa) And, verify that the transfer function's delay times are adequate for your ROM. The SDK may do all this for you, but, since you're having trouble, double checking this may be worthwhile (e.g. See Table 18 et. al. in the ROM document).
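If the STM32 HAL is doing the SPI setup (as in the linked MX_SPI_Init()), the mode comes from two init fields. A minimal sketch, assuming an hspi1 handle like the one CubeMX generates:

/* Mode 0 (CPOL=0, CPHA=0), one of the two modes the M95xxx accepts */
hspi1.Init.CLKPolarity = SPI_POLARITY_LOW;
hspi1.Init.CLKPhase    = SPI_PHASE_1EDGE;
/* Mode 3 would be SPI_POLARITY_HIGH / SPI_PHASE_2EDGE */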
However, since the ROM seems to respond well for most byte locations, the timing may already be adequate.
One thing you might also try: since it's the first byte that is the problem (and here I mean the first byte after the LSB address byte), the memory might need some additional [and undocumented] setup time.
So, after the transfer(xlow), add a small spin loop after that before entering the data transfer loop, to give the ROM time to set up for the write burst [or read burst].
This could be confirmed by starting xlow at a non-zero value (e.g. 3) and shortening the transfer. If the problem byte tracks xlow, that's one way to verify that the setup time may be required. You'd need to use a different data value for each test to be sure you're not just reading back a stale value from a prior test.
I want to set up the UART on an ATmega88PA. First I tried to set an interrupt on the UDRE flag, but that was not working, so for transmission I use normal polling.
Because the code was not working, I started again from scratch with a basic program.
#define F_CPU 1000000UL
#define USART_BAUDRATE 9600
#define BAUD_PRESCALE (((F_CPU / (USART_BAUDRATE * 8UL))) - 1)

#include <avr/io.h>
#include <avr/interrupt.h>

char ReceivedByte = '#';

int main(void)
{
    UCSR0A = (1 << U2X0);
    /* Turn on the transmission and reception circuitry. */
    UCSR0B = (1 << RXEN0) | (1 << TXEN0);
    /* Use 8-bit character sizes. */
    //UCSR0C = (1 << UCSZ00) | (1 << UCSZ01);
    /* BAUD prescale */
    UBRR0 = 12;
    /* Load upper 8-bits of the baud rate value into the high byte of the UBRR register. */
    //UBRR0H = (BAUD_PRESCALE >> 8);
    /* Load lower 8-bits of the baud rate value into the low byte of the UBRR register. */
    //UBRR0L = BAUD_PRESCALE;
    UCSR0B |= (1 << RXCIE0);
    sei();

    DDRB |= 0x04;
    PORTB &= ~0x04;

    for (;;)
    {
        /* Do nothing until data have been received and are ready to be read from UDR. */
        //while ((UCSR0A & (1 << RXC0)) == 0) {};
        /* Fetch the received byte value into the variable "ReceivedByte". */
        //ReceivedByte = UDR0;
        if (ReceivedByte == '1')
            PORTB |= 0x04;
        else
            PORTB &= ~0x04;
        /* Do nothing until UDR is ready for more data to be written to it. */
        while ((UCSR0A & (1 << UDRE0)) == 0) {};
        /* Echo the received byte back to the computer. */
        UDR0 = ReceivedByte;
    }
}

ISR(USART_RX_vect)
{
    ReceivedByte = UDR0;
}
And the code is working, but when I open the Arduino serial monitor and connect my module to it, I receive my poor '#' along with some garbage. Not all the time, but mostly; the garbage is 1 or 2 bytes. Can someone help me?
EDIT: It seems that when I send data from my Bluetooth module to a Samsung Galaxy S3, the data is perfect. I have no clue why the serial monitor, and also the laptop, receive a lot of garbage along with the data when using the same Bluetooth module. If this helps you answer my question, great.
EDIT: Sorry, forget the last edit; that was only a single char being sent OK. I changed the char and the garbage is still there. When I send a string, it is unreadable.
EDIT: As I commented on the post below by embedded_guy, I solved the problem by inserting a _delay_ms(1) after sending each byte, and it is working right now. I believe the statement
while ((UCSR0A & (1 << UDRE0)) == 0) {};
is not doing its job. I hope this will help others.
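For reference, the workaround described above looks like this (a sketch; _delay_ms() comes from <util/delay.h>):

while ((UCSR0A & (1 << UDRE0)) == 0) {};  /* wait for the data register */
UDR0 = ReceivedByte;                      /* send the byte */
_delay_ms(1);                             /* the pause that masks the garbage */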
I don't know if this will work for you, but I really only see a couple of things that could be an issue.
First, all the examples of setting the baud prescaler that I could find used two instructions with the high and low registers of UBRR0. If you have already stepped through your code and examined that register to ensure it is correctly configured, then this is not the issue. Otherwise, I would recommend setting it like this for the value you have in your code:
UBRR0H = 0;
UBRR0L = 12;
The other thing I see is that you are never setting UCSR0C. You have it commented out, and I would expect it to operate correctly with its default (reset) settings, but it is always good to be explicit, just in case.
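For example, spelling out the default frame format (asynchronous, 8 data bits, no parity, 1 stop bit):

UCSR0C = (1 << UCSZ01) | (1 << UCSZ00);  /* 8N1 */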
Finally, you may want to take a look at this page on Simple Serial Communications.
EDIT
Based on your most recent edit, I would take the Bluetooth out of the picture. I would recommend connecting a logic analyzer to the UART transmit pin of your microcontroller and seeing whether the data coming out of the ATmega is what you expect. If that data is good, I would then look at why the Bluetooth is not working as anticipated.
Try to use an F_CPU of at least 2 MHz.
Also, make your ReceivedByte volatile, since it is shared with an ISR. Try it like this:
volatile unsigned char ReceivedByte;
I always seem to encounter this dilemma when writing low-level code for MCUs.
I never know where to declare pin definitions so as to make the code as reusable as possible.
In this case I'm writing a driver to interface an 8051 to an MCP4922 12-bit serial DAC.
I'm unsure how and where I should declare the pin definitions for the CS (chip select) and LDAC (data latch) pins of the DAC. At the moment they're declared in the header file for the driver.
I've done a lot of research trying to figure out the best approach but haven't really found anything.
I basically want to know what the best practices are. If there are some books worth reading or online information, examples, etc., any recommendations would be welcome.
Just a snippet of the driver so you get the idea
/**
 * @brief This function is used to write a 16-bit data word to DAC B: 12 data bits plus 4 configuration bits.
 * @param dac_data A 12-bit word
 * @param ip_buf_unbuf_select Input buffered/unbuffered select bit. Buffered = 1; unbuffered = 0
 * @param gain_select Output gain selection bit. 1 = 1x (VOUT = VREF * D/4096). 0 = 2x (VOUT = 2 * VREF * D/4096)
 */
void MCP4922_DAC_B_TX_word(unsigned short int dac_data, bit ip_buf_unbuf_select, bit gain_select)
{
    unsigned char low_byte = 0, high_byte = 0;

    CS = 0;                                   /* Select the chip */
    high_byte |= ((0x01 << 7) | (0x01 << 4)); /* Set bit to select DAC B and set SHDN bit high for active operation */
    if (ip_buf_unbuf_select) high_byte |= (0x01 << 6);
    if (gain_select)         high_byte |= (0x01 << 5);
    high_byte |= ((dac_data >> 8) & 0x0F);
    low_byte  |= dac_data;
    SPI_master_byte(high_byte);
    SPI_master_byte(low_byte);
    CS = 1;

    LDAC = 0;                                 /* Latch the data */
    LDAC = 1;
}
This is what I did in a similar case; this example is for writing an I²C driver:
// Structure holding information about an I²C bus
struct IIC_BUS
{
    int pin_index_sclk;
    int pin_index_sdat;
};

// Initialize I²C bus structure with pin indices
void iic_init_bus( struct IIC_BUS* iic, int idx_sclk, int idx_sdat );

// Write data to an I²C bus, toggling the bits
void iic_write( struct IIC_BUS* iic, uint8_t iicAddress, uint8_t* data, uint8_t length );
All pin indices are declared in an application-dependent header file to allow quick overview, e.g.:
// ...
#define MY_IIC_BUS_SCLK_PIN 12
#define MY_IIC_BUS_SDAT_PIN 13
#define OTHER_PIN 14
// ...
In this example, the I²C bus implementation is completely portable. It only depends on an API that can write to the chip's pins by index.
Edit:
This driver is used like this:
// main.c
#include "iic.h"
#include "pin-declarations.h"
int main(void)
{
    struct IIC_BUS mybus;
    iic_init_bus( &mybus, MY_IIC_BUS_SCLK_PIN, MY_IIC_BUS_SDAT_PIN );
    // ...
    iic_write( &mybus, 0x42, some_data_buffer, buffer_length );
}
In one shop I worked at, the pin definitions were put into a processor-specific header file. At another shop, I broke the header files into themes associated with modules in the processor, such as DAC, DMA, and USB. A master include file for the processor included all of these themed header files. We could model different varieties of the same processor by including different module header files in the processor file.
You could create an implementation header file. This file would define I/O pins in terms of the processor header file. That gives you a layer of abstraction between your application and the hardware; the idea is to decouple the application from the hardware as much as possible. A sketch of that idea follows.
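For example (the file and pin names here are hypothetical, just to illustrate the layering):

/* io_map.h : implementation header. Drivers see logical names;
 * only this file knows the processor's actual pins. */
#ifndef IO_MAP_H
#define IO_MAP_H

#include "processor_pins.h" /* processor-specific header (assumed name) */

#define DAC_CS   P1_4 /* chip select for the MCP4922 */
#define DAC_LDAC P1_5 /* data latch for the MCP4922 */

#endif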
If only the driver needs to know about the CS pin, then the declaration should not appear in the header, but within the driver module itself. Code re-use is best served by hiding data at the most restrictive scope possible.
In the event that an external module needs to control CS, add an access function to the device-driver module so that you have a single point of control. This is useful if, during debugging, you need to know where and when an I/O pin is being asserted; you have only one point at which to apply instrumentation or breakpoints.
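For instance (a sketch of that single point of control; the function name is illustrative):

/* in the driver module: the only place that touches CS */
static void dac_select(unsigned char select)
{
    CS = select ? 0 : 1; /* active low; a breakpoint here catches every assertion */
}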
The answer with the run-time configuration will work for a decent CPU like an ARM or PowerPC, but the author is running an 8051 here, so #define is probably the best way to go. Here's how I would break it down:
blah.h:
#define CSN_LOW() CS = 0
#define CSN_HI() CS = 1
#define LATCH_STROBE() \
do { LDAC = 0; LDAC = 1; } while (0)
blah.c:
#include "blah.h"

void blah_update( U8 high, U8 low )
{
    CSN_LOW();
    SPI_master_byte(high);
    SPI_master_byte(low);
    CSN_HI();
    LATCH_STROBE();
}
If you need to change the pin definition, or move to a different CPU, it should be obvious where you need to update the code. It also helps when you have to adjust the timing on the bus (i.e. insert a delay here and there), as you don't need to make changes all over the place. Hope it helps.