MCP3424 ADC Problem: Not saving config byte / Reads not according to datasheet

I am having a bit of a hard time with an MCP3424 ADC. (datasheet)
Setup
The device is connected via I2C (100 kbit/s); the address is "1101000" (A1 and A2 tied to GND).
I can communicate with the device and write the configuration byte according to the timing diagram on page 21.
RESET
According to page 25 it is recommended to reset the device once via a general call address.
This appears to work, since the device sends the ACK bit (9th bit held LOW):
PICTURE: Oscilloscope - General call RESET
Write config byte
After waiting >300 µs (for the device to power up again), I write the configuration byte, which is also acknowledged:
Default config register byte (page 18):

RDY | C1 | C0 | O/C | S1 | S0 | G1 | G0
 0  |  0 |  0 |  1  |  0 |  0 |  0 |  0
RDY bit is not relevant (see conversion mode)
Channel 0 is selected
Conversion mode is continuous (-> setting RDY bit has no effect)
Resolution is 12bit
Gain = 1
Now for a test I want to read from channel 2 (C1 C0 = 10), so the configuration byte is: 01010000
PICTURE: Datasheet diagram vs Oscilloscope - Write config byte
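To make the bit layout concrete, here is a minimal C sketch of composing that byte; the macro names are illustrative, not from any MCP3424 library:

#include <stdint.h>

/* Bit fields per the config register table above (RDY C1 C0 O/C S1 S0 G1 G0) */
#define MCP3424_RDY     (1u << 7)          /* start/ready flag (ignored in continuous mode) */
#define MCP3424_CH(n)   (((n) & 3u) << 5)  /* channel select C1 C0, 0..3                    */
#define MCP3424_CONT    (1u << 4)          /* O/C = 1: continuous conversion                */
#define MCP3424_RES_12  (0u << 2)          /* S1 S0 = 00: 12-bit resolution                 */
#define MCP3424_GAIN_1  (0u)               /* G1 G0 = 00: gain = 1                          */

/* Channel 2 (C1 C0 = 10), continuous, 12-bit, gain 1: 0b01010000 = 0x50 */
uint8_t config = MCP3424_CH(2) | MCP3424_CONT | MCP3424_RES_12 | MCP3424_GAIN_1;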
Reading Data from device
According to the timing diagram on page 24, one has to read at least 3 bytes when using 12- to 16-bit resolution. The first and second bytes (after the address byte) hold the actual value; the third byte is the configuration register, which is repeated as long as the clock is provided and the master does not send a NAK.
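For reference, a plain read() on a Linux i2c-dev bus does exactly this: it sends the address byte with the R bit set and then clocks in n bytes, with no preceding command byte. A minimal sketch, assuming bus 1 and the 0x68 address (both illustrative):

#include <fcntl.h>
#include <linux/i2c-dev.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/i2c-1", O_RDWR);           /* bus number is an assumption */
    if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x68) < 0)  /* 0x68 = 1101000b             */
        return 1;

    uint8_t buf[3];                                /* data MSB, data LSB, config  */
    if (read(fd, buf, sizeof buf) != (ssize_t)sizeof buf)
        return 1;

    /* Mask to 12 bits and sign-extend the two's-complement result */
    int32_t raw = ((buf[0] & 0x0F) << 8) | buf[1];
    if (raw & 0x800)
        raw -= 0x1000;
    printf("raw = %ld, config = 0x%02X\n", (long)raw, buf[2]);

    close(fd);
    return 0;
}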
Problem
When I read 4 bytes:
I do not get a valid raw voltage value (VDD on CH3)
The device sends the default config byte as third byte.
The device does not repeat the config byte on the fourth byte.
PICTURE: Datasheet diagram vs Oscilloscope - Read 4 bytes
What I tried also
I tried another MCP3424 IC to rule out the possibility of it being faulty.
I looked at some Python libraries and found that people were using the smbus_i2c_read_block_data method (apparently because there was no method for just reading n bytes without sending a command (or register) byte first).
I also tried that, using the i2c_smbus_read_i2c_block_data() method from "i2c-utils.h"
with 0x00 as the command code (I also tried the config byte).
But that only gives empty responses (the device still sends ACK bits):
PICTURE: Oscilloscope - i2c_read_block_data
Your help is very much appreciated!
Cheers, Roman

SOLUTION
The I2C address 1101000 was blocked by the kernel configuration.
It showed up as UU when probing with i2cdetect, meaning a kernel driver was bound to it (probably reserved for the RTC?).
Changing the address to 1101011 solved the problem and the device behaved as expected.

Related

Difference between API 1 and API 2 mode of XBee

I am having a problem finding the difference between API 1 and API 2 mode of XBee. I have done my programming work and I have my master's thesis defence on Wednesday. I know how to use XBee, but I am very weak in the basics of RF. Please explain this difference in a few basic words that I can use in my thesis defence.
I personally don't like API mode 2 because it adds to the complexity of sending and receiving data unless you handle it at a low level of your serial driver.
The benefit of API mode 2 is that you can watch the byte stream and know that any 0x7E byte you see is definitely a "start of frame". With API mode 1, it's possible to see that byte within a frame's contents.
If you pick up a stream in the middle, you need to do additional work to verify you've found the start. It isn't very difficult to do, especially if you include a sanity check on the 16-bit frame length following the 0x7E start of frame. In most cases you'll be decoding complete frames and not need to look for the start of the next frame.
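As a rough illustration of that sanity check (the length cap of 256 is an arbitrary assumption; pick one that fits your frames):

#include <stddef.h>
#include <stdint.h>

/* Returns nonzero if buf looks like the start of an API frame: a 0x7E
   delimiter followed by a plausible 16-bit big-endian length field. */
int plausible_frame_start(const uint8_t *buf, size_t avail)
{
    if (avail < 3 || buf[0] != 0x7E)
        return 0;
    uint16_t len = (uint16_t)((buf[1] << 8) | buf[2]);
    return len > 0 && len <= 256;   /* 256 is an assumed upper bound */
}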
Since escaping also includes the XON and XOFF characters, I guess API mode 2 might be necessary if there are other devices in the serial stream (wired between the XBee and the host sending/receiving frames) that can't handle those characters.
Edit to include details on API mode 2:
In either API mode, the byte 0x7E indicates start of frame.
In mode 2, if the following bytes appear inside the frame, they're replaced with an escaped two-byte sequence of 0x7D followed by the original byte XORed with 0x20:
byte in frame                      escaped sequence
0x7E (start of frame)              0x7D 0x5E
0x7D (start of escape sequence)    0x7D 0x5D
0x13 (XOFF)                        0x7D 0x33
0x11 (XON)                         0x7D 0x31
Note that the frame length and checksum are based on the original, unescaped sequence of bytes. If you are writing code to handle escaping outbound frames and unescaping inbound frames, you want it to happen at a fairly low level of your serial driver.
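A minimal sketch of the outbound side of that escaping (the function names are mine):

#include <stddef.h>
#include <stdint.h>

static int xbee_needs_escape(uint8_t b)
{
    return b == 0x7E || b == 0x7D || b == 0x13 || b == 0x11;
}

/* Escapes src into dst (dst must hold up to 2*len bytes) and returns the
   escaped length. Applied to everything after the 0x7E start delimiter;
   the delimiter itself is sent unescaped. */
size_t xbee_escape(const uint8_t *src, size_t len, uint8_t *dst)
{
    size_t n = 0;
    for (size_t i = 0; i < len; i++) {
        if (xbee_needs_escape(src[i])) {
            dst[n++] = 0x7D;            /* escape marker          */
            dst[n++] = src[i] ^ 0x20;   /* original byte XOR 0x20 */
        } else {
            dst[n++] = src[i];
        }
    }
    return n;
}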
Everything is explained here; good luck with your thesis:
https://www.digi.com/resources/documentation/Digidocs/90001456-13/tasks/t_configure_operating_mode.htm?TocPath=XBee%20API%20mode%7COperating%20mode%20configuration%7C_____0

Where can I find the device specific JTAG instructions for Cortex-M3?

I'm trying to communicate with a Cortex-M3 based microcontroller (LPC1769) through JTAG. I already have the hardware required, and have managed to get an example program to work, but to progress further, I need to know the device-specific JTAG instructions that are available in this case. I have read the corresponding section of the Cortex-M3 technical reference manual (link), and all that told me was that the device uses a standard CoreSight debug port. In particular, I'd like to read the device ID with the IDCODE instruction. Some sites suggest that the IDCODE instruction might be b0001 or b1110 for this device, but neither of them seems to work. b0001 seems more likely to me, as that's the value I read from the IR after the TAP has been reset.
I also considered the possibility that the instruction I'm using is correct, and I'm just not reading the device ID register properly. I'm using an FTDI cable with the FT232H chip, and the application I'm using is based on FTDI's AN129 example code (link), using MPSSE commands. I use the 0x2A command to clock data in from the TAP, the 0x1B command to clock data out to the TAP, and the 0x3B command to do both simultaneously. If anyone could provide some insight as to what I'm doing wrong (or whether I'm using the right IDCODE instruction at all), that would be much appreciated.
EDIT:
I made some progress, but the IDCODE instruction still eludes me. I managed to read the Device ID after setting the TAP controller to Test-Logic-Reset state (which loads the IDCODE instruction in the IR). However, I tried all possible (16) instructions, and while some of them resulted in different reads from the DR, none loaded the Device ID register.
This is the function I use to insert the instruction, once the TAP controller is in Shift-IR state:
int clockOut(FT_HANDLE* ftHandle, BYTE data, BYTE length)
{
    FT_STATUS ftStatus = FT_OK;
    BYTE byOutputBuffer[1024];   // Buffer to hold MPSSE commands and data to be sent to the FT232H
    DWORD dwNumBytesToSend = 0;  // Index into the output buffer
    DWORD dwNumBytesSent = 0;    // Count of actual bytes sent - used with FT_Write

    byOutputBuffer[dwNumBytesToSend++] = 0x1B;        // MPSSE: clock data bits out, LSB first, -ve edge
    byOutputBuffer[dwNumBytesToSend++] = length - 1;  // Number of clock pulses = (length - 1) + 1, so the
                                                      // length parameter is the actual number of clock pulses
    byOutputBuffer[dwNumBytesToSend++] = data;        // Bits to shift out

    ftStatus = FT_Write(*ftHandle, byOutputBuffer, dwNumBytesToSend, &dwNumBytesSent);  // Send off the command
    return ftStatus;
}
The length parameter is set to 4, and the data parameter is set to 0x0X (where I tried all possible values for X, none of which led to success).
I managed to get it to work. The problem was that when I sent out 4 bits to the IR, it in fact received 5. After finishing the transmission, the next rising edge of TCK was supposed to change the state of the TAP controller, but as it was still in the Shift-IR state, it not only changed the state but also sampled TDI and did another (fifth) shift. To compensate, I only shifted the lower 3 bits of the instruction, and then used a 0x4B MPSSE command to simultaneously clock out a TMS signal (to change the state) and send out the MSB of the instruction.
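A sketch of that corrected sequence, mirroring clockOut() above (the function name and buffer handling are mine; 0x4B is the MPSSE "clock TMS, TDI held on bit 7" command):

int clockOutIR4(FT_HANDLE* ftHandle, BYTE instruction)
{
    FT_STATUS ftStatus = FT_OK;
    BYTE byOutputBuffer[16];
    DWORD dwNumBytesToSend = 0;
    DWORD dwNumBytesSent = 0;

    byOutputBuffer[dwNumBytesToSend++] = 0x1B;  // Clock data bits out, LSB first, -ve edge
    byOutputBuffer[dwNumBytesToSend++] = 2;     // 3 clock pulses (length - 1)
    byOutputBuffer[dwNumBytesToSend++] = instruction & 0x07;  // Lower 3 bits of the instruction

    byOutputBuffer[dwNumBytesToSend++] = 0x4B;  // Clock TMS; TDI is held at bit 7 of the data byte
    byOutputBuffer[dwNumBytesToSend++] = 0;     // 1 clock pulse
    byOutputBuffer[dwNumBytesToSend++] = (BYTE)(0x01 | ((instruction & 0x08) << 4));
    // TMS = 1 moves Shift-IR -> Exit1-IR while the instruction's MSB is shifted in

    ftStatus = FT_Write(*ftHandle, byOutputBuffer, dwNumBytesToSend, &dwNumBytesSent);
    return ftStatus;
}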

Usage of EN4B command

Can anybody explain the usage of the EN4B command of Micron SPI flash chips?
I want to know the difference between 3-byte and 4-byte address mode in SPI.
I was going through the SPI drivers when I found these commands.
Thanks in advance!
From a legacy point of view, SPI flash commands have always used 3 bytes for the address their operation targets.
This was fine, as 24 bits can address up to 16 MiB.
When flashes grew larger, it became necessary to switch from 3-byte to 4-byte addressing.
Whenever you have doubts regarding the hardware, you can find the answers in the proper datasheet; I don't know which specific chip you are referring to, however.
I found the Micron N25Q512A NOR flash, which is 512 Mib (64 MiB), so it needs a form of 4-byte addressing; from it you can learn that:
There are 3-byte legacy commands and new 4-byte commands.
For example, 03h and 13h for the single read.
You can supply a default fourth address byte with a specific register.
The Extended Address Register lets you choose the region of the flash used by the legacy commands.
You can enable 4-byte addressing for legacy commands.
Either write the appropriate bit in the Nonvolatile Configuration Register or use the ENTER / EXIT 4-BYTE ADDRESS MODE commands (opcodes B7h and E9h respectively).
This Linux patch also has some insights, basically noting that some chips support only one of the three points above.
Macronix seems to have first opted for number 3 only, and Spansion for number 1.
Checking some of their datasheets suggests that both now support all three methods.
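As a rough sketch of point 3 on a Linux host with spidev (the device path is an assumption, and many parts want a WRITE ENABLE before EN4B; check your datasheet):

#include <fcntl.h>
#include <linux/spi/spidev.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

static int spi_send_opcode(int fd, uint8_t op)
{
    struct spi_ioc_transfer tr = {
        .tx_buf = (unsigned long)&op,
        .len    = 1,
    };
    return ioctl(fd, SPI_IOC_MESSAGE(1), &tr);
}

int main(void)
{
    int fd = open("/dev/spidev0.0", O_RDWR);  /* device path is an assumption */
    if (fd < 0)
        return 1;

    spi_send_opcode(fd, 0x06);  /* WRITE ENABLE (often required first)    */
    spi_send_opcode(fd, 0xB7);  /* ENTER 4-BYTE ADDRESS MODE (EN4B)       */
    /* ... issue legacy commands with 4-byte addresses, then 0xE9 to exit */

    close(fd);
    return 0;
}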

c166 bootloader write to internal flash

I'm writing a bootloader for a C166 chip, to be exact, the 169FH. The bootloader can currently open a TCP/IP connection so a PC can send an Intel hex file to the bootloader. This Intel hex file is saved in RAM. After receiving the hex file, it is read line by line to write the bytes to the correct location in the flash. The flash location where the bootloader is stored is of course different from where the main program is saved.
These are the first two lines of the Intel hex file:
:0200000400C03A
:20100000CA11EE111212341258127A129A12BC12DE12001322134413601388138813881349
The first line gives the highest 16 bits of the 32-bit flash address, in this case 0x00C0.
The second line contains the lower 16 bits of the 32-bit flash address, in this case 0x1000. This yields the total address 0x00C01000; the byte written to that address should be 0xCA.
I'm trying to write the byte to that address using the following code:
uint8_t u8Byte = (uint8_t )XHugeStringToDec((const __huge char *)&Ext_RamFlashSave.Byte[u32Idx], 9+(u8ByteIdx*2), 2);
uint8_t *u8Address = (uint8_t*)((uint32_t)(u32ExtendedLinearAddress << 16) + (uint32_t)u16BaseAddress + (uint32_t)u8ByteIdx);
*u8Address = (u8Byte);
XHugeStringToDec() is a function to get the hexadecimal value from the Intel hex string. I know this part works correctly.
Ext_RamFlashSave.Byte is the array where the Intel hex file is stored.
The u32ExtendedLinearAddress variable is 0x00C0 and is set earlier. The u16BaseAddress is 0x1000 and is also set earlier in the code.
The problem is in the last line:
*u8Address = (u8Byte);
I have verified that u8Address is indeed 0x00C01000 and u8Byte is indeed 0xCA. But when I monitor my flash address, I do not see the byte written.
I can imagine that it is some kind of write protection, but I cannot find out how to work around this. Or do I need a different way to write to the flash address?
More info on how the Intel hex file is built is described here:
https://en.wikipedia.org/wiki/Intel_HEX
I am not familiar with the chip you mentioned.
But for writing to flash, the following algorithm is generally used (a generic sketch in C follows after the list):
Unlock the flash. This usually means sending a specific data sequence. The flash I use right now takes 0xA0A0 -> 1 ms delay -> 0x0505. This enables writing to the flash.
Clear the error flags. Flash has some error flags, like write error or read error. Also check the busy flag to make sure the flash is ready.
Erase the page. Yes, you have to erase the page before writing to it. Even if you want to change a single bit, you have to erase the entire page.
Write the data (finally). But make sure the endianness is correct. Sometimes your controller is little-endian and the flash is big-endian, so you have to match the flash's byte order.
Lock the flash. This is usually done by sending the same sequence used for unlocking. (Refer to the datasheet.)
You cannot write to flash directly; you have to go through the entire process. That's why flash writes are slow.
Some flashes support 8-bit writes, while others only support 16-bit or 32-bit writes, etc. You have to send that many bits per write.
When you have to modify a small portion of a page: read the page into a buffer, modify the data in the buffer, then erase the page and write the entire buffer back.
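A very generic sketch of that sequence. The unlock values are the example ones from above and the status register address is a placeholder; neither is from the 169FH documentation, so the real register map and command sequence must come from its flash programming section:

#include <stdint.h>

/* Placeholder status register, NOT a real 169FH address */
#define FLASH_STATUS    (*(volatile uint16_t *)0x00C00000u)
#define FLASH_BUSY_BIT  0x0001u

static void delay_1ms(void)
{
    for (volatile uint32_t i = 0; i < 10000u; i++)
        ;  /* crude busy-wait; use a hardware timer in real code */
}

int flash_write_word(volatile uint16_t *addr, uint16_t data)
{
    *addr = 0xA0A0u;        /* 1. unlock sequence (example values)  */
    delay_1ms();
    *addr = 0x0505u;

    while (FLASH_STATUS & FLASH_BUSY_BIT)   /* 2. wait until ready  */
        ;

    /* 3. the containing page must already have been erased         */

    *addr = data;           /* 4. program one word                  */
    while (FLASH_STATUS & FLASH_BUSY_BIT)
        ;

    *addr = 0xA0A0u;        /* 5. re-lock, often the same sequence  */
    delay_1ms();
    *addr = 0x0505u;
    return 0;
}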

Need some help on Serial Port Transmission

I'm currently using a DS89C450 MCU with Keil C51 programming.
I have an infrared receiver attached to P3^2, which is the falling-edge trigger. Whenever I press a key on the remote control, it triggers the interrupt and saves the data into xdata X or Y (bit by bit, then byte by byte, for 500 bytes).
I'm trying to transmit the data bits (either '1' or '0') from the buffer to HyperTerminal via the serial port. However, I do not get any data displayed when I press the remote control.
Can any expert tell me why, and how do I get it to work?
The program is here:
http://pastebin.com/hpAw2ipH
Google "Terminal by br#y", it can show serial comms in HEX. Most UARTs cannot send a single bit, rather they will send a character of N bits, usually 7 or 8, with start/stop/parity bits (8-bits, no parity, 1 stop bit being the universal default). It can make life easier to encode data as ASCII, perhaps even with start/stop characters, so you know when you're getting real data.
For even more detail, use an oscilloscope, Bus Pirate, or Logic Sniffer (from DangerousPrototypes.com) to sniff the communication data.
