I'm attempting to bit-bang a sequence of data from an ATSAME70Q21 to a series of shift registers (TLC5971 LED drivers). My production hardware will likely use hardware SPI for this purpose, but my current prototype uses GPIO pins, so I'd like to use bit banging.
I'm fairly comfortable arranging the data in the correct sequence as per the TLC5971 datasheet, and shifting it out one bit at a time. I have an oscilloscope connected to both the DATA (BBDAT) and CLOCK (BBCLK) lines, and it has shown that something is not quite right with the function.
If I simply toggle each of the BBDAT and BBCLK lines HIGH and then immediately LOW again, I get a very stable 1.85MHz signal on both lines on the oscilloscope. However, when I loop through the data structure, setting DATA either HIGH or LOW depending on the bit value, and then toggling CLOCK HIGH/LOW, the resulting signal frequency on BBDAT is around an order of magnitude lower than that of BBCLK. Given that I only toggle BBCLK once at the end of each BBDAT set/clear, I'm not sure why this would be the case.
void writeData(void)
{
    Pio *baseData = (Pio *)(uintptr_t)PIOD;
    Pio *baseClock = (Pio *)(uintptr_t)PIOB;
    // I have 3 TLC5971 ICs in series, hence i < 3
    for (uint i = 0; i < 3; i++) {
        // Each packet for each TLC5971 IC is 28 bytes long
        for (uint j = 0; j < 28; j++) {
            for (uint k = 0; k < 8; k++) {
                if (dataPacket[i][j] & (1 << k)) {
                    // If bit is 1, set DATA line (BBDAT)
                    baseData->PIO_SODR = 1U << (BBDAT & 0x1F);
                } else {
                    // If bit is 0, clear DATA line (BBDAT)
                    baseData->PIO_CODR = 1U << (BBDAT & 0x1F);
                }
                // Toggle the CLOCK line (BBCLK)
                baseClock->PIO_SODR = 1U << (BBCLK & 0x1F);
                baseClock->PIO_CODR = 1U << (BBCLK & 0x1F);
            }
        }
    }
}
The above code (with no optimisation) produces a BBDAT signal of ~100kHz and a BBCLK signal of ~1.4MHz. What am I missing here?
I'm trying to use a temperature sensor (PCT2075) with an MSP430F249.
To get a temperature, I read 2 bytes from this sensor.
I based my code on this link:
https://e2e.ti.com/support/microcontrollers/msp430/f/166/t/589712?MSP430FR5969-Read-multiple-bytes-of-data-i2c-with-repeated-start-and-without-interrupts
Since I'm using an MSP430F249, I modified the code from that link.
However, I get the same value twice; I think it is the MSByte both times.
Is there a way to get both bytes from the sensor?
My code is here:
void i2c_read_multi(uint8_t slv_addr, uint8_t reg_addr, uint8_t l, uint8_t *arr)
{
uint8_t i;
while(UCB0STAT & UCBBUSY);
UCB0I2CSA = slv_addr; // set slave address
UCB0CTL1 |= UCTR | UCTXSTT; // transmitter mode and START condition.
while(UCB0CTL1 & UCTXSTT);
UCB0TXBUF = reg_addr;
while(!(UCB0CTL1 & UCTXSTT));
UCB0CTL1 &= ~UCTR; // receiver mode
UCB0CTL1 |= UCTXSTT; // START condition
while(UCB0CTL1 & UCTXSTT); // make sure start has been cleared
for (i = 0; i < l; i++) {
while(!(IFG2 & UCB0RXIFG));
if(i == l - 1){
UCB0CTL1 |= UCTXSTP; // STOP condition
}
arr[i] = UCB0RXBUF;
}
while(UCB0CTL1 & UCTXSTP);
}
There are two issues ...
The linked-to code assumes that the port only needs to read one byte for each output value.
But, based on the sensor documentation you've shown, for each value output to the array, we need to read two bytes (one for MSB and one for LSB).
And, we need to merge those two byte values into one 16 bit value. Note that arr is now uint16_t instead of uint8_t. And, l is now the number of [16 bit] samples (vs. number of bytes). So, the caller of this may need to be adjusted accordingly.
Further, note that we have to "ignore" the lower 5 bits of lsb. We do that by shifting the 16 bit value right by 5 bits (e.g. val16 >>= 5). I assume that's the correct way to do it. Or, it could be just val16 &= ~0x1F [less likely]. You may have to experiment a bit.
Here's the refactored code.
Note that this assumes the data arrives in "big endian" order [based on my best guess]. If it's actually little endian, reverse the msb = and lsb = statements.
Also, the placement of the "STOP" condition code may need to be adjusted. I had to guess as to whether it should be placed above the LSB read or MSB read.
I chose the LSB (the last byte) because that's closest to how the linked general I2C read is done. That is, I2C doesn't know or care about the MSB/LSB multiplexing of the device in question; it wants the STOP just before the last byte [not before the last 16 bit sample].
void
i2c_read_multi(uint8_t slv_addr, uint8_t reg_addr, uint8_t l,
uint16_t *arr)
{
uint8_t i;
uint8_t msb;
uint8_t lsb;
uint16_t val16;
while (UCB0STAT & UCBBUSY);
// set slave address
UCB0I2CSA = slv_addr;
// transmitter mode and START condition.
UCB0CTL1 |= UCTR | UCTXSTT;
while (UCB0CTL1 & UCTXSTT);
UCB0TXBUF = reg_addr;
while (!(UCB0CTL1 & UCTXSTT));
// receiver mode
UCB0CTL1 &= ~UCTR;
// START condition
UCB0CTL1 |= UCTXSTT;
// make sure start has been cleared
while (UCB0CTL1 & UCTXSTT);
for (i = 0; i < l; i++) {
while (!(IFG2 & UCB0RXIFG));
msb = UCB0RXBUF;
while (!(IFG2 & UCB0RXIFG));
// STOP condition
if (i == l - 1) {
UCB0CTL1 |= UCTXSTP;
}
lsb = UCB0RXBUF;
val16 = msb;
val16 <<= 8;
val16 |= lsb;
// use only most 11 significant bits
// NOTE: this _may_ not be the correct way to scale the data
val16 >>= 5;
arr[i] = val16;
}
while (UCB0CTL1 & UCTXSTP);
}
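As a usage sketch (the slave address, register address, and scaling below are assumptions from my reading of the PCT2075 datasheet, so verify them against your part; 0x48 assumes A0-A2 tied low):
/* read one 11-bit temperature sample and convert it */
uint16_t sample[1];
int16_t raw;
i2c_read_multi(0x48, 0x00, 1, sample);    /* 0x48 = default address, 0x00 = Temp register */
raw = (int16_t)sample[0];
if (sample[0] & 0x0400)                   /* bit 10 set -> negative 11-bit value */
    raw -= 2048;                          /* sign-extend */
/* one LSB is 0.125 degC, so temperature in degC = raw * 0.125 */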
I am working on a library for controlling the M95128-W EEPROM from an STM32 device. I have the library writing and reading back data, however the first byte of each page is not as expected and seems to be fixed at 0x04.
For example, I write 128 bytes with value 0x80 across two pages starting at address 0x00. When read back I get:
byte[0] = 0x04;
byte[1] = 0x80;
byte[2] = 0x80;
byte[3] = 0x80;
.......
byte[64] = 0x04;
byte[65] = 0x80;
byte[66] = 0x80;
byte[67] = 0x80;
I have debugged the SPI with a logic analyzer and confirmed the correct bytes are being sent. When using the logic analyzer on the read command, the mysterious 0x04 is transmitted by the EEPROM.
Here is my code:
void FLA::write(const void* data, unsigned int dataLength, uint16_t address)
{
int pagePos = 0;
int pageCount = (dataLength + 64 - 1) / 64;
int bytePos = 0;
int startAddress = address;
while (pagePos < pageCount)
{
HAL_GPIO_WritePin(GPIOB,GPIO_PIN_2, GPIO_PIN_SET); // WP High
chipSelect();
_spi->transfer(INSTRUCTION_WREN);
chipUnselect();
uint8_t status = readRegister(INSTRUCTION_RDSR);
chipSelect();
_spi->transfer(INSTRUCTION_WRITE);
uint8_t xlow = address & 0xff;
uint8_t xhigh = (address >> 8);
_spi->transfer(xhigh); // part 1 address MSB
_spi->transfer(xlow); // part 2 address LSB
for (unsigned int i = 0; i < 64 && bytePos < dataLength; i++ )
{
uint8_t byte = ((uint8_t*)data)[bytePos];
_spi->transfer(byte);
printConsole("Wrote byte to ");
printConsoleInt(startAddress + bytePos);
printConsole("with value ");
printConsoleInt(byte);
printConsole("\n");
bytePos ++;
}
_spi->transfer(INSTRUCTION_WRDI);
chipUnselect();
HAL_GPIO_WritePin(GPIOB,GPIO_PIN_2, GPIO_PIN_RESET); //WP LOW
bool writeComplete = false;
while (writeComplete == false)
{
uint8_t status = readRegister(INSTRUCTION_RDSR);
if(status&1<<0)
{
printConsole("Waiting for write to complete....\n");
}
else
{
writeComplete = true;
printConsole("Write complete to page ");
printConsoleInt(pagePos);
printConsole("# address ");
printConsoleInt(bytePos);
printConsole("\n");
}
}
pagePos++;
address = address + 64;
}
printConsole("Finished writing all pages total bytes ");
printConsoleInt(bytePos);
printConsole("\n");
}
void FLA::read(char* returndata, unsigned int dataLength, uint16_t address)
{
chipSelect();
_spi->transfer(INSTRUCTION_READ);
uint8_t xlow = address & 0xff;
uint8_t xhigh = (address >> 8);
_spi->transfer(xhigh); // part 1 address
_spi->transfer(xlow); // part 2 address
for (unsigned int i = 0; i < dataLength; i++)
returndata[i] = _spi->transfer(0x00);
chipUnselect();
}
Any suggestions or help appreciated.
UPDATES:
I have tried writing sequentially 255 bytes increasing data to check for rollover. The results are as follows:
byte[0] = 4; // Incorrect Mystery Byte
byte[1] = 1;
byte[2] = 2;
byte[3] = 3;
.......
byte[63] = 63;
byte[64] = 4; // Incorrect Mystery Byte
byte[65] = 65;
byte[66] = 66;
.......
byte[127] = 127;
byte[128] = 4; // Incorrect Mystery Byte
byte[129] = 129;
The pattern continues. I have also tried writing just 8 bytes from address 0x00 and the same problem persists, so I think we can rule out rollover.
I have tried removing the debug printConsole and it has had no effect.
Here is a SPI logic trace of the write command:
And a close up of the first byte that is not working correctly:
Code can be viewed on gitlab here:
https://gitlab.com/DanielBeyzade/stm32f107vc-home-control-master/blob/master/Src/flash.cpp
Init code of SPI can be seen here in MX_SPI_Init()
https://gitlab.com/DanielBeyzade/stm32f107vc-home-control-master/blob/master/Src/main.cpp
I have another device on the SPI bus (RFM69HW RF Module) which works as expected sending and receiving data.
The explanation was actually already given by Craig Estey in his answer. You do have a rollover. You write a full page and then - without cycling the CS pin - you send the INSTRUCTION_WRDI command. Guess what the binary code of this command is? If you guessed that it's 4, you're absolutely right.
Check your code here:
chipSelect();
_spi->transfer(INSTRUCTION_WRITE);
uint8_t xlow = address & 0xff;
uint8_t xhigh = (address >> 8);
_spi->transfer(xhigh); // part 1 address MSB
_spi->transfer(xlow); // part 2 address LSB
for (unsigned int i = 0; i < 64 && bytePos < dataLength; i++ )
{
uint8_t byte = ((uint8_t*)data)[bytePos];
_spi->transfer(byte);
// ...
bytePos ++;
}
_spi->transfer(INSTRUCTION_WRDI); // <-------------- ROLLOVER!
chipUnselect();
With these devices, each command MUST start with cycling CS. After CS goes low, the first byte is interpreted as the command. All remaining bytes - until CS is cycled again - are interpreted as data. So you cannot send multiple commands in a single "block" with CS being constantly pulled low.
Another thing is that you don't need the WRDI command at all - after the write instruction is terminated (by CS going high), the WEL bit is automatically reset. See page 18 of the datasheet:
The Write Enable Latch (WEL) bit, in fact, becomes reset by any of the
following events:
• Power-up
• WRDI instruction execution
• WRSR instruction completion
• WRITE instruction completion.
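Putting both points together, a sketch of what each pass of your write loop could look like (your own function names, WRDI dropped, WREN given its own CS cycle - illustrative only, not tested on your hardware):
chipSelect();
_spi->transfer(INSTRUCTION_WREN);             // WREN is a command of its own: CS low, one byte, CS high
chipUnselect();

chipSelect();
_spi->transfer(INSTRUCTION_WRITE);            // fresh CS cycle: WRITE + 16-bit address + up to 64 data bytes
_spi->transfer((address >> 8) & 0xFF);
_spi->transfer(address & 0xFF);
for (unsigned int i = 0; i < 64 && bytePos < dataLength; i++)
    _spi->transfer(((uint8_t*)data)[bytePos++]);
chipUnselect();                               // CS high starts the self-timed write; WEL clears by itself

while (readRegister(INSTRUCTION_RDSR) & 0x01) // wait for WIP (bit 0) to clear
    ;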
Caveat: I don't have a definitive solution, just some observations and suggestions [that would be too large for a comment].
From 6.6: Each time a new data byte is shifted in, the least significant bits of the internal address counter are incremented. If more bytes are sent than will fit up to the end of the page, a condition known as “roll-over” occurs. In case of roll-over, the bytes exceeding the page size are overwritten from location 0 of the same page.
So, in your write loop code, you do: for (i = 0; i < 64; i++). This is incorrect in the general case if the LSB of address (xlow) is non-zero. You'd need to do something like: for (i = xlow % 64; i < 64; i++)
In other words, you might be getting the page boundary rollover. But, you mentioned that you're using address 0x0000, so it should work, even with the code as it exists.
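A sketch of that idea using the variable names from your write() (illustrative only):
unsigned int pageRoom = 64 - (address % 64);      // bytes left in the current 64-byte page
unsigned int chunk = dataLength - bytePos;        // bytes still to write
if (chunk > pageRoom)
    chunk = pageRoom;
for (unsigned int i = 0; i < chunk; i++)
    _spi->transfer(((uint8_t*)data)[bytePos++]);
address += chunk;                                 // the next burst then starts on a page boundary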
I might remove the print statements from the loop as they could have an effect on the serialization timing.
I might try this with an incrementing data pattern: (e.g.) 0x01,0x02,0x03,... That way, you could see which byte is rolling over [if any].
Also, try writing a single page from address zero, and write less than the full page size (i.e. less than 64 bytes) to guarantee that you're not getting rollover.
Also, from figure 13 [the timing diagram for WRITE], it looks like once you assert chip select, the ROM wants a continuous bit stream clocked precisely, so you may have a race condition where you're not providing the data at precisely the clock edge(s) needed. You may want to use the logic analyzer to verify that the data appears exactly in sync with the clock edge as required (i.e. at the clock rising edge).
As you've probably already noticed, offset 0 and offset 64 are getting the 0x04. So, this adds to the notion of rollover.
Or, it could be that the first data byte of each page is being written "late" and the 0x04 is a result of that.
I don't know whether your output port has a SILO so you can send data as with a traditional serial I/O port, or whether you have to maintain precise bit-for-bit timing (which I presume _spi->transfer would do).
Another thing to try is to write a shorter pattern (e.g. 10 bytes) starting at a non-zero address (e.g. xhigh = 0; xlow = 4) and the incrementing pattern and see how things change.
UPDATE:
From your update, it appears to be the first byte of each page [obviously].
From the exploded view of the timing, I notice SCLK is not strictly uniform. The pulse width is slightly erratic. Since the write data is sampled on the clock rising edge, this shouldn't matter. But, I wonder where this comes from. That is, is SCLK asserted/deasserted by the software (i.e. transfer) and SCLK is connected to another GPIO pin? I'd be interested in seeing the source for the transfer function [or a disassembly].
I've just looked up SPI here: https://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus and it answers my own question.
From that, here is a sample transfer function:
/*
* Simultaneously transmit and receive a byte on the SPI.
*
* Polarity and phase are assumed to be both 0, i.e.:
* - input data is captured on rising edge of SCLK.
* - output data is propagated on falling edge of SCLK.
*
* Returns the received byte.
*/
uint8_t SPI_transfer_byte(uint8_t byte_out)
{
uint8_t byte_in = 0;
uint8_t bit;
for (bit = 0x80; bit; bit >>= 1) {
/* Shift-out a bit to the MOSI line */
write_MOSI((byte_out & bit) ? HIGH : LOW);
/* Delay for at least the peer's setup time */
delay(SPI_SCLK_LOW_TIME);
/* Pull the clock line high */
write_SCLK(HIGH);
/* Shift-in a bit from the MISO line */
if (read_MISO() == HIGH)
byte_in |= bit;
/* Delay for at least the peer's hold time */
delay(SPI_SCLK_HIGH_TIME);
/* Pull the clock line low */
write_SCLK(LOW);
}
return byte_in;
}
So, the delay times need to be at least the ones the ROM needs. Hopefully, you can verify that this is the case.
But, I also notice that on the problem byte, the first bit of the data appears to lag its rising clock edge. That is, I would want the data line to be stable before the clock rising edge.
But, that assumes CPOL=0,CPHA=1. Your ROM can be programmed for that mode or CPOL=0,CPHA=0, which is the mode used by the sample code above.
This is what I see from the timing diagram. It implies that the transfer function does CPOL=0,CPHA=0:
[timing sketch: an SCLK pulse with DATA stable across the rising edge, as for CPOL=0,CPHA=0]
This is what I originally expected (CPOL=0,CPHA=1) based on something earlier in the ROM document:
[timing sketch: an SCLK pulse with DATA changing at the rising edge, as for CPOL=0,CPHA=1]
The ROM can be configured to use either CPOL=0,CPHA=0 or CPOL=1,CPHA=1. So, you may need to configure these values to match the transfer function (or vice versa), and verify that the transfer function's delay times are adequate for your ROM. The SDK may do all this for you, but since you're having trouble, double-checking may be worthwhile (e.g. see Table 18 et al. in the ROM document).
However, since the ROM seems to respond well for most byte locations, the timing may already be adequate.
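For what it's worth, if the STM32 HAL sets up the port (the question links an MX_SPI_Init()), those two modes map onto the CLKPolarity/CLKPhase fields of the SPI init struct; a sketch, with hspi1 as a placeholder handle name:
/* SPI mode 0 (CPOL=0, CPHA=0) */
hspi1.Init.CLKPolarity = SPI_POLARITY_LOW;   /* clock idles low */
hspi1.Init.CLKPhase    = SPI_PHASE_1EDGE;    /* data sampled on the first (rising) edge */
/* SPI mode 3 (CPOL=1, CPHA=1) would instead use SPI_POLARITY_HIGH and SPI_PHASE_2EDGE */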
One more thing you might try: since it's the first byte that is the problem (and here I mean the first byte after the LSB address byte), the memory might need some additional [and undocumented] setup time.
So, after the transfer(xlow), add a small spin loop after that before entering the data transfer loop, to give the ROM time to set up for the write burst [or read burst].
This could be confirmed by starting xlow at a non-zero value (e.g. 3) and shortening the transfer. If the problem byte tracks xlow, that's one way to verify that the setup time may be required. You'd need to use a different data value for each test to be sure you're not just reading back a stale value from a prior test.
I have been given a set of C-like snippets that describe a CRC algorithm, and an article that explains how to transform a serial implementation into a parallel one, which I need to implement in Verilog.
I tried multiple online code generators, both serial and parallel (although serial would not work in the final solution), and also tried working from the article, but got nothing similar to what these snippets generate.
I should say I'm more or less exclusively a hardware engineer and my understanding of C is rudimentary. I have also never worked with a CRC other than as a straightforward shift register implementation. I can see the polynomial and initial value from what I have, but that is more or less it.
The serial implementation uses an augmented message. Should I also make the parallel one work on a message 6 bits wider and append zeros to it?
I do not understand too well how the final value crc6 is generated. crcValue is computed with the CalcCrc function for the final zeros of the augmented message, then its top bit is written into its place in crc6 and removed before feeding it to the function again. Why is that? When working through the algorithm to get the matrices for the parallel implementation, should I take crc6 as my final result rather than the last value of crcValue?
Regardless of how crc6 is obtained, the snippet for the CRC check just runs everything through the function. How does that work?
Here are the code snippets:
const unsigned crc6Polynom = 0x03;   // x**6 + x + 1
unsigned CalcCrc(unsigned crcValue, unsigned thisbit) {
    unsigned m = crcValue & crc6Polynom;
    while (m > 0) {
        thisbit ^= (m & 1);
        m >>= 1;
    }
    return (((thisbit << 6) | crcValue) >> 1);
}
// obtain CRC6 for sending (6 bit)
unsigned GetCrc(unsigned crcValue) {
unsigned crc6 = 0;
for (i = 0; i < 6; i++) {
crcValue = CalcCrc(crcValue, 0);
crc6 |= (crcValue & 0x20) | (crc6 >> 1);
crcValue &= 0x1F; // remove output bit
}
return (crc6);
}
// Calculate CRC6
unsigned crcValue = 0x3F;
for (i = 1; i < nDataBits; i++) { // Startbit excluded
unsigned thisBit = (unsigned)((telegram >> i) & 0x1);
crcValue = CalcCrc(crcValue, thisBit);
}
/* now send telegram + GetCrc(crcValue) */
// Check CRC6
unsigned crcValue = 0x3F;
for (i = 1; i < nDataBits+6; i++) { // No startbit, but with CRC
unsigned thisBit = (unsigned)((telegram >> i) & 0x1);
crcValue = CalcCrc(crcValue, thisBit);
}
if (crcValue != 0) { /* put error handler here */ }
Thanks in advance for any advice, I'm really stuck there.
XORing bits of the data stream can be done in parallel because only the least significant bit is used for feedback (in this case), and the order of the data stream bit XOR operations doesn't affect the result.
Whether the hardware would need a parallel version depends on how a data stream is handled. The hardware could calculate the CRC one bit at a time during transmission or reception. If the hardware is staged to work with 6 bit characters, then a parallel version would make sense.
Since the snippets use a right shift for the CRC, it would seem that data for each 6 bit character is transmitted and received least significant bit first, to allow for hardware that could calculate CRC 1 bit at a time as it's transmitted or received. After all 6 bit data characters are transmitted, then the 6 bit CRC is transmitted (also least significant bit first).
The snippets seem wrong. My guess at what they should be:
/* calculate crc6 1 bit at a time */
const unsigned crc6Polynom =0x43; /* x**6 + x + 1 */
unsigned CalcCrc(unsigned crcValue, unsigned thisbit) {
crcValue ^= thisbit;
if(crcValue&1)
crcValue ^= crc6Polynom;
crcValue >>= 1;
return crcValue;
}
Example for passing 6 bits at a time. A 64 by 6 bit table lookup could be used to replace the for loop.
/* calculate 6 bits at a time */
unsigned CalcCrc6(unsigned crcValue, unsigned sixbits) {
int i;
crcValue ^= sixbits;
for(i = 0; i < 6; i++){
if(crcValue&1)
crcValue ^= crc6Polynom;
crcValue >>= 1;
}
return crcValue;
}
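As mentioned, a 64-entry table can replace the loop; here is a sketch (the table and helper names are only illustrative):
/* build the table once, using CalcCrc6 above */
static unsigned crc6Table[64];

void InitCrc6Table(void)
{
    unsigned i;
    for (i = 0; i < 64; i++)
        crc6Table[i] = CalcCrc6(0, i);
}

/* table-driven equivalent: the result depends only on (crcValue ^ sixbits) */
unsigned CalcCrc6Tbl(unsigned crcValue, unsigned sixbits)
{
    return crc6Table[(crcValue ^ sixbits) & 0x3F];
}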
Assume that telegram contains 31 bits, 1 start bit + 30 data bits (five 6 bit characters):
/* code to calculate crc 6 bits at a time: */
unsigned crcValue = 0x3F;
int i;
telegram >>= 1; /* skip start bit */
for (i = 0; i < 5; i++) {
crcValue = CalcCrc6(crcValue, telegram & 0x3f);
telegram >>= 6;
}
I've currently run into a technical issue that makes me want to improve a previous implementation. The situation is:
I have 5 GPIO pins, and I need to use these pins as a hardware identifier, for example:
pin1: LOW
pin2: LOW
pin3: LOW
pin4: LOW
pin5: LOW
This identifies one of my HW variants, so there can be many combinations. In the previous design, the developer used if-else to implement this, like:
if(PIN1 == LOW && ... && ......&& PIN5 ==LOW)
{
HWID = variant1;
}
else if( ... )
{
}
...
else
{
}
but I think this is not good because there will be more than 200 variants and the code will become too long, so I want to change it to a mask. The idea is to treat these five pins as a five-bit register. Because I can predict which variant to assign from the GPIO status (this is already defined by the hardware team, who provide a variant list with the configuration of all these GPIO pins), the code may look like this:
enum {
    variant1   = 0x0,   // GPIO config 1
    // ...
    variant243 = 0xF3   // GPIO config 243
};
Then I can read the status of these five GPIO pins and compare it against a mask to see whether they are equal or not.
Question
However, a GPIO pin has three states, namely LOW, HIGH and OPEN. Is there a good calculation method for building a mask over three-state pins?
You have 5 pins of 3 states each. You can approach representing this in a few ways.
First, imagine using this sort of framework:
#define LOW (0)
#define HIGH (1)
#define OPEN (2)
uint16_t config = PIN_CONFIG(pin1, pin2, pin3, pin4, pin5);
if(config == PIN_CONFIG(LOW, HIGH, OPEN, LOW, LOW))
{
// do something
}
switch(config) {
case PIN_CONFIG(LOW, HIGH, OPEN, LOW, HIGH):
// do something;
break;
}
uint16_t config_max = PIN_CONFIG(OPEN, OPEN, OPEN, OPEN, OPEN);
uint32_t hardware_ids[config_max + 1] = {0};
// init your hardware ids
hardware_ids[PIN_CONFIG(LOW, HIGH, HIGH, LOW, LOW)] = 0xF315;
hardware_ids[PIN_CONFIG(LOW, LOW, HIGH, LOW, LOW)] = 0xF225;
// look up a HWID
uint32_t hwid = hardware_ids[config];
This code is just the sort of stuff you'd like to do with pin configurations. The only bit left to implement is PIN_CONFIG
Approach 1
The first approach is to keep using it as a bitfield, but instead of 1 bit per pin you use 2 bits to represent each pin state. I think this is the cleanest, even though you're "wasting" half a bit for each pin.
#define PIN_CLAMP(x) ((x) & 0x03)
#define PIN_CONFIG(p1, p2, p3, p4, p5) \
    (PIN_CLAMP(p1) | \
    (PIN_CLAMP(p2) << 2) | \
    (PIN_CLAMP(p3) << 4) | \
    (PIN_CLAMP(p4) << 6) | \
    (PIN_CLAMP(p5) << 8))
This is kind of nice because it leaves room for a "Don't care" or "Invalid" value if you are going to do searches later.
Approach 2
Alternatively, you can use arithmetic to do it, making sure you use the minimum number of bits necessary: about 1.5 bits to encode 3 values. As expected, this goes from 0 up to 242 for a total of 3^5 = 243 states.
Without knowing anything else about your situation, I believe this is the smallest complete encoding of your pin states.
(Practically, you have to use 8 bits to encode the 243 values, so it comes out a bit above 1.5 bits per pin.)
#define PIN_CLAMP(x) ((x) % 3) /* note this should really assert */
#define PIN_CONFIG(p1, p2, p3, p4, p5) \
    (PIN_CLAMP(p1) + \
    (PIN_CLAMP(p2) * 3) + \
    (PIN_CLAMP(p3) * 9) + \
    (PIN_CLAMP(p4) * 27) + \
    (PIN_CLAMP(p5) * 81))
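For completeness, decoding such a base-3 packed value back into individual pin states could look like this (sketch; the helper name is illustrative):
/* returns 0 = LOW, 1 = HIGH, 2 = OPEN for pin_number 0..4 */
unsigned get_pin_base3(uint16_t config, uint8_t pin_number)
{
    unsigned divisor = 1;
    while (pin_number--)
        divisor *= 3;
    return (config / divisor) % 3;
}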
Approach 1.1
If you don't like preprocessor stuff, you could use functions a bit like this:
enum PinLevel { low = 0, high, open };
void set_pin(uint32_t * config, uint8_t pin_number, enum PinLevel value) {
    int shift = pin_number * 2;       // 2 bits per pin
    int pinmask = 0x03 << shift;      // 2 bits set to on, moved to the right spot
    *config &= ~pinmask;
    *config |= (((int)value) << shift) & pinmask;
}
enum PinLevel get_pin(uint32_t config, uint8_t pin_number) {
int shift = pin_number * 2; // 2 bits
return (enum PinLevel)((config >> shift) & 0x03);
}
This follows the first (2 bit per value) approach.
Approach 1.2
YET ANOTHER WAY using C's cool bitfield syntax:
struct pins {
uint16_t pin1 : 2;
uint16_t pin2 : 2;
uint16_t pin3 : 2;
uint16_t pin4 : 2;
uint16_t pin5 : 2;
};
typedef union pinconfig_ {
struct pins pins;
uint16_t value;
} pinconfig;
pinconfig input;
input.value = 0; // don't forget to init the members unless static
input.pins.pin1 = HIGH;
input.pins.pin2 = LOW;
printf("%d", input.value);
input.value = 0x0003;
printf("%d", input.pins.pin1);
The union lets you view the bitfield as a number and vice versa.
(note: all code completely untested)
This is my suggestion to solve the problem
#include<stdio.h>
#define LOW 0
#define HIGH 1
#define OPEN 2
#define MAXGPIO 5
int main()
{
int gpio[MAXGPIO] = { LOW, LOW, OPEN, HIGH, OPEN };
int mask = 0;
for (int i = 0; i < MAXGPIO; i++)
mask = mask << 2 | gpio[i];
printf("Masked: %d\n", mask);
printf("Unmasked:\n");
for (int i = 0; i < MAXGPIO; i++)
printf("GPIO %d = %d\n", i + 1, (mask >> (2*(MAXGPIO-1-i))) & 0x03);
return 0;
}
A little explanation about the code.
Masking
I am using 2 bits to save each GPIO value. The combinations are:
00: LOW
01: HIGH
10: OPEN
11 is invalid
I iterate over the array gpio (where I have the acquired values) and build the mask in the mask variable by shifting left 2 bits and applying an OR operation.
Unmasking
To get the initial values back I just do the opposite operation: shift right by 2 bits times (MAXGPIO - 1 - i) and mask with 0x03.
I mask with 0x03 because those are the only bits I am interested in.
This is the result of the program
$ cc -Wall test.c -o test;./test
Masked: 38
Unmasked:
GPIO 1 = 0
GPIO 2 = 0
GPIO 3 = 2
GPIO 4 = 1
GPIO 5 = 2
Hope this helps
I've been stuck on this all day. I'm trying to create a countdown timer using two seven segment displays. I want it to start at 20 and count down to zero. While the count is below 10 I only want the left display on (i.e. no 0 in the tens place). I'm using an ATmega324A. I have all of port C connected to the display segments and am using PIND0 to toggle between the two. Here is what I have so far.
#include <avr/io.h>
#include<util/delay.h>
int main(void) {
int prescale = (8000000/8)/1000-1;
int digit = 1;
int i;
uint8_t display;
uint8_t seven_seg[] = {0x3F,0x06,0x5B,0x4F,0x66,0x6D,0x7D,0x07,0x7F,0x6F};
// Set OC1 to output
DDRD = (1<<0);
DDRC = 0xFF;
OCR1A = prescale;
//clear counter on compare match
TCCR1A = (0<<COM1A1) | (1<<COM1A0);
//Set Prescale and CTC Mode
TCCR1B = (0<<CS12) | (1<<CS11) | (0<<CS10) | (0<<WGM13) | (1<<WGM12);
while(1) {
display++;
if(display>50) display = 0;
for (i = 250; i>0; i--){
PORTD ^= 0<<PIND0;
PORTC = seven_seg[display%10];
PORTD ^= 1<<PIND0;
_delay_ms(100);
for (i = 250; i>0; i--){
PORTD ^= 1<<PIND0;
PORTC = seven_seg[display/10];
PORTD ^= 0<<PIND0;
_delay_ms(100);
}
}
while((TIFR1 & (1<<OCF1A)) == 0) {}
TIFR1 &= (1 << OCF1A);
}
}
All this does is set both displays to 0. Do I need another for loop to iterate through the seven_seg[] array while it's doing this? really not sure how to tackle this one. Any help would be great.
You are making two big mistakes:
you don't use the timer
you don't separate the display-driving logic from the value-generating logic
The best thing would be to split the tasks and plan how to implement this.
Task one: providing the data to display
Task two: transferring that data into a display-friendly representation
Task three: the actual displaying of that data
Task one is easy. Let's assume you want to display integers and you have three 7-segment displays.
So task one is to provide some data to display.
int16_t numberToDisplay = 234;
Task two is also not that hard. A display-friendly representation would be one byte per display element.
#define NUM_7SEGS 3
volatile uint8_t dispData[NUM_7SEGS]; // volatile since it is accessed from different contexts
Now we need some mechanism that transfers the input value into the display data:
void val2DispData(int16_t val)
{
    uint8_t i;
    for (i = NUM_7SEGS; i; --i) {
        uint8_t r = (uint8_t)(val % 10);
        val /= 10;
        dispData[i - 1] = seven_seg[r];
    }
}
Fine, and now?
Task three is the most difficult one. We need something that tells the output what to do.
Since we want to multiplex the 3 display elements, that means:
deactivate the current display element
put the data of the next digit on the output port
activate the next display element
wait a bit.
We want to do these 4 steps fast enough that the observer does not notice that only one element is active at a time.
Since this is totally independent of the other program logic, we do it in the "background".
Your main program flow simply calls that function, and the background timer ISR worries about the displaying.
So we have to set up a timer and switch the element data in its interrupt service routine (for setting up the timer and its interrupts please refer to another tutorial; a minimal setup sketch is also given at the end of this answer).
// this has to be called cyclically from the timer ISR
// frequency is not that important but should be at
// least NUM_7SEGS * 200 Hz to not look ugly
void cyclicDisplayTask()
{
static uint8_t currentElement = 0;
// disable all elements
PORTD = 0;
// put data on the port
PORTC = dispData[currentElement]; // this is why the volatile is necessary: without it the compiler might not notice that the values are changed by the main program flow
// enable next element
PORTD = (1<<currentElement);
++currentElement;
if(currentElement>=NUM_7SEGS){
currentElement = 0;
}
}
Of course you have to adapt the enabling of specific display elements to your hardware.
Please also note that you may need a transistor to drive each element. An AVR port pin is strong enough to drive a single segment, but the pin that drives the common anode/cathode of an element may be overloaded. This of course depends on the LEDs within the segments; if they are low-current LEDs (~2 mA) it is OK.
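And here is the promised minimal sketch of the timer part (untested, assuming an ATmega324A running at 8 MHz as in your prescale calculation): Timer0 in CTC mode calls cyclicDisplayTask() at roughly 1 kHz, which is comfortably above NUM_7SEGS * 200 Hz.
#include <avr/io.h>
#include <avr/interrupt.h>

void displayTimerInit(void)
{
    TCCR0A = (1 << WGM01);               // CTC mode
    TCCR0B = (1 << CS01) | (1 << CS00);  // prescaler 64 -> 125 kHz timer clock at 8 MHz
    OCR0A  = 124;                        // 125 kHz / 125 = 1 kHz compare match
    TIMSK0 = (1 << OCIE0A);              // enable compare match A interrupt
    sei();                               // global interrupt enable
}

ISR(TIMER0_COMPA_vect)
{
    cyclicDisplayTask();                 // refresh one display element per tick
}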