The idea is to read an arbitrary bit from a port.
Accessing one known bit is simple, e.g.
P0_0 <-- gets bit 0 from port 0
But what if I need to access bit y via a function?
__bit read_bit(__bit y) {
    return P0_y; /* just an idea, but the syntax isn't right */
}
I'm using SDCC and the 8051 header.
If it's a literal constant, you can use a token-pasting macro trick:
#define READ_P0_BIT(BIT) (P0_ ## BIT)
unsigned x = READ_P0_BIT(1); /* pastes to P0_1 at preprocessing time; ## works on tokens, not runtime values */
If it's not a literal constant, you can do this:
int readP0bit(int bitNo)
{
    switch (bitNo)
    {
        case 0: return P0_0;
        case 1: return P0_1;
        // ...
        case 7: return P0_7;
        default: return 0;
    }
}
You can make a local array that captures the port bits inside the function, and use the bit number as an index into it.
Something like:
__bit read_bit(const unsigned char b)
{
    /* SDCC does not support arrays of __bit, so copy the port
       bits into an unsigned char array and index that instead */
    unsigned char all_bits[8] = {
        P0_0,
        P0_1,
        /* etc. */
        P0_7
    };
    return (b < 8) ? all_bits[b] : 0;
}
Just use a function like this:
char chek_bit_p0(unsigned char chk_bit) {
    if ((P0 >> chk_bit) & 1)
        return 1;
    else
        return 0;
}
or simply use a macro like the one below (the preferred way); note the parentheses around x, and that the & 1 already yields 0 or 1, so no further normalisation is needed:
#define chek_bit_p0(x) ((P0 >> (x)) & 1)
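For example, scanning all eight bits with it (a minimal usage sketch; it assumes SDCC's <8051.h> header, which declares P0):

#include <8051.h>   /* SDCC 8051 header: P0, P0_0 .. P0_7 */

void poll_port0(void)
{
    unsigned char i;
    for (i = 0; i < 8; i++)
    {
        if (chek_bit_p0(i))
        {
            /* bit i of port 0 is currently set */
        }
    }
}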
I want to write a function for my AVR ATmega328 that debounces switches using a state machine to confirm a switch press. After finishing it, I wanted to generalize the function so I can reuse it in the future with little work, but that means passing the pin I want to use as a function parameter, and I just can't get that to work.
This is what I have now:
int debounceSwitch(unsigned char *port, uint8_t mask)
{
    int n = 0;
    while (1)
    {
        switch (n)
        {
            case 0: //NoPush State
                _delay_ms(30);
                if (!(*port & (1 << mask))) { n = n + 1; }
                else { return 0; }
                break;
            case 1: //MaybePush State
                _delay_ms(30);
                if (!(*port & (1 << mask))) { n = n + 1; }
                else { n = n - 1; }
                break;
            case 2: //YesPush State
                _delay_ms(30);
                if (!(*port & (1 << mask))) { return 1; }
                else { n = n - 1; }
                break;
        }
    }
}
I have a hunch my issue is with the data type I'm using as the parameter, and I seem to have gotten different answers online.
Any help would be appreciated!
In AVR, ports are special I/O registers, accessed with the IN and OUT instructions rather than like ordinary memory with LD/ST. From the port definition you can see that you need to make the port pointer volatile, which the compiler would also have told you with a warning when you tried to pass PORTB to the function:
#define PORTB _SFR_IO8(0x05)
which maps to
#define _SFR_IO8(io_addr) _MMIO_BYTE((io_addr) + __SFR_OFFSET)
#define _MMIO_BYTE(mem_addr) (*(volatile uint8_t *)(mem_addr))
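So a corrected signature and call might look like the following sketch (assuming avr-gcc and <avr/io.h>; note that the pin state is read from the PINx register, not PORTx):

#include <avr/io.h>
#include <stdint.h>

int debounceSwitch(volatile uint8_t *port, uint8_t mask); /* the asker's function */

int main(void)
{
    DDRB  &= (uint8_t)~(1u << PB0);   /* PB0 as input */
    PORTB |= (1u << PB0);             /* enable the internal pull-up */

    int pressed = debounceSwitch(&PINB, PB0); /* pass the input register */
    (void)pressed;
    for (;;) {}
}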
Various issues:
The function should be void debounceSwitch(volatile uint8_t* port, uint8_t pin). Pointers to hardware registers must always be volatile. And it doesn't make sense for it to return anything.
Never use signed int literals like 1 when bit-shifting. Use 1u << n, or your program will misbehave once n is 15 or more, since int is only 16 bits on AVR.
Burning away 30 ms at a time in a busy-wait is horrible practice. It locks the CPU at 100% doing nothing meaningful, for an eternity in CPU terms.
There are many ways to debounce buttons. The simplest professional form is probably a periodic timer with an interrupt every 10 ms (which should be enough; if in doubt, measure the bounce of your button with a scope). It will look something like the following pseudo code:
volatile bool button_pressed = false;

void timer_interrupt (void)
{
    static uint8_t prev = 0;
    uint8_t button = port & mask;      /* raw sample */
    button_pressed = button && prev;   /* pressed if high two ticks in a row */
    prev = button;
}
This assumes the buttons use active-high logic.
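On the asker's ATmega328 this could be driven from a timer interrupt, roughly like the sketch below (assumptions: avr-gcc, Timer0 in CTC mode, a 16 MHz clock, and an active-high button on PB0; the compare value is approximate):

#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdbool.h>

volatile bool button_pressed = false;

ISR(TIMER0_COMPA_vect)                     /* fires roughly every 10 ms */
{
    static uint8_t prev = 0;
    uint8_t button = PINB & (1u << PB0);
    button_pressed = button && prev;
    prev = button;
}

void timer_init(void)
{
    TCCR0A = (1u << WGM01);                /* CTC mode */
    TCCR0B = (1u << CS02) | (1u << CS00);  /* prescaler clk/1024 */
    OCR0A  = 156;                          /* 16 MHz / 1024 / 156 ~ 100 Hz, i.e. ~10 ms */
    TIMSK0 = (1u << OCIE0A);               /* enable compare-A interrupt */
    sei();
}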
What I dislike about your implementation is the tight coupling between the PORT/IO handling and the actual filter/debouncing logic. What do you do when the switch input arrives as a signal from somewhere else, e.g. over CAN?
It can also be handled much more easily if you think in terms of configurable/parameterizable filters: implement the logic once, then just create suitable configs and pass separate state variables into the filter.
#include <stdbool.h>
#include <stdint.h>

// Structure to keep state
typedef struct {
    bool    state;
    uint8_t cnt;
} deb_state_t;

// Structure to configure the filter's debounce values
typedef struct {
    uint8_t cnt[2];   // [0] = H->L transition, [1] = L->H transition
} deb_config_t;

bool debounce(bool in, deb_state_t *state, const deb_config_t *cfg)
{
    if (state->state != in) {
        state->cnt++;
        if (state->cnt >= cfg->cnt[in]) {   // enough consecutive samples: accept
            state->state = in;
            state->cnt = 0;
        }
    } else {
        state->cnt = 0;                     // input agrees with state: reset counter
    }
    return state->state;
}

static const deb_config_t debcfg_pin = { {3, 4} };
static const deb_config_t debcfg_can = { {2, 1} };

int main(void)
{
    bool in1, in2, out1, out2;
    deb_state_t debstate_pin = {0, 0};
    deb_state_t debstate_can = {0, 0};

    while (1) {
        // read pin and convert to 0/1
        in1 = READ_PORT(PORTx, PINxy);   // however this is defined on this architecture
        out1 = debounce(in1, &debstate_pin, &debcfg_pin);

        // same handling, but input from CAN
        in2 = READ_CAN(MSGx, SIGxy);     // however this is defined on this architecture
        out2 = debounce(in2, &debstate_can, &debcfg_can);

        // out1 & out2 are now debounced
    }
}
I need to convert a short value from host byte order to little endian. If the target were big endian, I could use htons(), but alas, it's not.
I guess I could do:
swap(htons(val))
But this could end up swapping the bytes twice, which would leave the result correct but cost performance I can't afford in my case.
Here is an article from IBM about endianness and how to determine it:
Writing endian-independent code in C: Don't let endianness "byte" you
It includes an example of how to determine endianness at run time (which you would only need to do once):
#include <stdio.h>
#include <stdlib.h>

const int i = 1;
#define is_bigendian() ( (*(char*)&i) == 0 )

int main(void) {
    union { int val; unsigned char c[sizeof(int)]; } u;
    u.val = 0x12345678;
    if (is_bigendian()) {
        printf("%X.%X.%X.%X\n", u.c[0], u.c[1], u.c[2], u.c[3]);
    } else {
        printf("%X.%X.%X.%X\n", u.c[3], u.c[2], u.c[1], u.c[0]);
    }
    exit(0);
}
The page also has a section on methods for reversing byte order:
short reverseShort (short s) {
    unsigned char c1, c2;
    if (is_bigendian()) {
        return s;
    } else {
        c1 = s & 255;
        c2 = (s >> 8) & 255;
        return (c1 << 8) + c2;
    }
}
or, working through a pointer to the raw bytes:
short reverseShort (char *c) {
    short s;
    char *p = (char *)&s;
    if (is_bigendian()) {
        p[0] = c[0];
        p[1] = c[1];
    } else {
        p[0] = c[1];
        p[1] = c[0];
    }
    return s;
}
Then you should know your endianness and call htons() conditionally. Actually, not even htons(): just swap the bytes conditionally, at compile time of course.
Something like the following:
unsigned short swaps(unsigned short val)
{
    return ((val & 0x00ff) << 8) | ((val & 0xff00) >> 8);
}

/* host to little endian */
#define PLATFORM_IS_BIG_ENDIAN 1

#if PLATFORM_IS_LITTLE_ENDIAN
unsigned short htoles(unsigned short val)
{
    /* no-op on a little endian platform */
    return val;
}
#elif PLATFORM_IS_BIG_ENDIAN
unsigned short htoles(unsigned short val)
{
    /* need to swap bytes on a big endian platform */
    return swaps(val);
}
#else
unsigned short htoles(unsigned short val)
{
    /* the platform hasn't been properly configured for the  */
    /* preprocessor to know if it's little or big endian;    */
    /* use the potentially less performant option that       */
    /* always works                                          */
    return swaps(htons(val));
}
#endif
If your build is properly configured (so that the preprocessor knows whether the target is little or big endian), you get an 'optimized' version of htoles(). Otherwise you get the potentially non-optimized version that depends on htons(). In any case, you get something that works.
Nothing too tricky and more or less portable.
Of course, you can further improve the optimization possibilities by implementing this with inline or as macros as you see fit.
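As a macro it might look like this (a sketch; note that it evaluates its argument twice, so avoid arguments with side effects):

#define SWAPS(v) ((unsigned short)((((v) & 0x00ffu) << 8) | (((v) & 0xff00u) >> 8)))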
You might want to look at something like the "Portable Open Source Harness (POSH)" for an actual implementation that defines the endianness for various compilers. Note, getting to the library requires going through a pseudo-authentication page (though you don't need to register or give any personal details): http://hookatooka.com/poshlib/
This trick should work: at startup, pass a dummy value through ntohs() and compare the result with the original. If the two are equal, the machine is big endian; otherwise it is little endian.
Then use a ToLittleEndian function that either does nothing or swaps the bytes, depending on the result of that initial test. (Note that ntohs() itself is a no-op on a big-endian host, so the conversion branch needs an explicit swap.)
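A sketch of that approach (hypothetical names; assuming a POSIX-style <arpa/inet.h> for ntohs()):

#include <arpa/inet.h>   /* ntohs() */
#include <stdint.h>

static int host_is_little_endian;

void endian_init(void)
{
    /* ntohs() leaves the value unchanged only on big-endian hosts */
    host_is_little_endian = (ntohs(0x1234) != 0x1234);
}

uint16_t to_little_endian(uint16_t v)
{
    /* no-op on little-endian hosts, explicit byte swap otherwise */
    return host_is_little_endian ? v : (uint16_t)((v << 8) | (v >> 8));
}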
My rule-of-thumb performance guess is that it depends on whether you are little-endian-ising a big block of data in one go, or just one value:
If it's just one value, the function-call overhead is going to swamp the cost of an unnecessary byte swap, even if the compiler doesn't optimise it away. And you're probably about to use the value as, say, the port number of a socket connection; opening or binding a socket takes an age compared with any bit manipulation. So just don't worry about it.
If it's a large block, you might worry that the compiler won't handle it well. So do something like this:

if (!is_little_endian()) {
    for (int i = 0; i < size; ++i) {
        vals[i] = swap_short(vals[i]);
    }
}
Or look into SIMD instructions on your architecture which can do it considerably faster.
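For example, with GCC or Clang you can write the loop with the __builtin_bswap16() extension and let the optimiser vectorise it (a sketch; the builtin is non-standard):

#include <stdint.h>
#include <stddef.h>

void swap_block(uint16_t *vals, size_t n)
{
    /* at -O3 compilers typically turn this into SIMD shuffles (e.g. pshufb on x86) */
    for (size_t i = 0; i < n; i++)
        vals[i] = __builtin_bswap16(vals[i]);
}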
Write is_little_endian() using whatever trick you like. I think the one Robert S. Barnes provides is sound, but since you usually know whether a given target is big- or little-endian, you could instead have a platform-specific header file that defines it as a macro evaluating to 1 or 0.
As always, if you really care about performance, then look at the generated assembly to see whether pointless code has been removed or not, and time the various alternatives against each other to see what actually goes fastest.
Unfortunately, there isn't really a cross-platform way to determine a system's byte order at compile time with standard C. I suggest adding a #define to your config.h (or whatever you or your build system uses for build configuration).
A unit test to check for the correct definition of LITTLE_ENDIAN or BIG_ENDIAN could look like this:
#include <assert.h>
#include <limits.h>
#include <stdint.h>

void check_bits_per_byte(void)
{ assert(CHAR_BIT == 8); }

void check_sizeof_uint32(void)
{ assert(sizeof (uint32_t) == 4); }

void check_byte_order(void)
{
    static const union { unsigned char bytes[4]; uint32_t value; } byte_order =
        { { 1, 2, 3, 4 } };
    static const uint32_t little_endian = 0x04030201ul;
    static const uint32_t big_endian = 0x01020304ul;

#ifdef LITTLE_ENDIAN
    assert(byte_order.value == little_endian);
#endif
#ifdef BIG_ENDIAN
    assert(byte_order.value == big_endian);
#endif
#if !defined LITTLE_ENDIAN && !defined BIG_ENDIAN
    assert(!"byte order unknown or unsupported");
#endif
}

int main(void)
{
    check_bits_per_byte();
    check_sizeof_uint32();
    check_byte_order();
}
On many Linux systems there is an <endian.h> or <sys/endian.h> with conversion functions; see the man page for endian(3).
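For instance, with glibc's <endian.h> (a sketch; the htole16()/le16toh() family is non-standard but widely available):

#define _DEFAULT_SOURCE   /* expose htole16()/le16toh() on glibc */
#include <endian.h>
#include <stdint.h>

uint16_t to_wire(uint16_t host_val)
{
    return htole16(host_val);   /* host to little endian */
}

uint16_t from_wire(uint16_t wire_val)
{
    return le16toh(wire_val);   /* little endian to host */
}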
I have written C macros to set and clear bits in two uint32_t variables. Here are the definitions:
extern uint32_t error_field, error_field2;
#define SET_ERROR_BIT(x) do{\
    if(x < 0 || x > 63){\
        break;\
    }\
    if(((uint32_t)x) < 32U){\
        (error_field |= ((uint32_t)1U << ((uint32_t)x)));\
        break;\
    } else if(((uint32_t)x) < 64U){\
        (error_field2 |= ((uint32_t)1U << (((uint32_t)x) - 32U)));\
    }\
}while(0)

#define RESET_ERROR_BIT(x) do{\
    if(((uint32_t)x) < 32U){\
        (error_field &= ~((uint32_t)1U << ((uint32_t)x)));\
        break;\
    } else if(((uint32_t)x) < 64U){\
        (error_field2 &= ~((uint32_t)1U << (((uint32_t)x) - 32U)));\
    }\
} while(0)
I am passing values of an enumeration that looks like this:
enum error_bits {
    error_chamber01_data = 0,
    error_port21_data,
    error_port22_data,
    error_port23_data,
    error_port24_data,
    /* this goes on until 47 */
};
This warning is produced:
left shift count >= width of type [-Wshift-count-overflow]
I am calling the Macros like this:
USART2->CR1 |= USART_CR1_RXNEIE;
SET_ERROR_BIT(error_usart2);
/*error_usart2 is 47 in the enum*/
return -1;
I get this warning for every use of the macro, even those where the left shift count is < 31. If I write out the macro's expansion by hand, no warning is produced. The behaviour is the same with a 64-bit variable. I am programming an STM32F7 with the AC6 STM32 MCU GCC compiler.
I can't figure out why this happens. Can anyone help me?
Probably a problem with the compiler not being able to diagnose correctly, as stated by M Oehm. A workaround could be to use the remainder operation instead of subtraction:
#define _SET_BIT(x, bit) (x) |= 1U << ((bit) % 32U)
#define SET_BIT(x, bit) _SET_BIT(x, (uint32_t)(bit))

#define _SET_ERROR_BIT(x) do{\
    if((x) < 32U){\
        SET_BIT(error_field, x);\
    } else if((x) < 64U){\
        SET_BIT(error_field2, x);\
    }\
}while(0)

#define SET_ERROR_BIT(x) _SET_ERROR_BIT((uint32_t)(x))
This way the compiler is finally smart enough to know that the shift count can never reach 32.
The inner "_" macro forces x to be a uint32_t regardless of how the macro is called, avoiding the undefined behaviour of a call with a negative value of x.
Tested in coliru
Problem:
In the macros you distinguish two cases which, on their own, are okay. The warning comes from the branch that isn't executed, where the shift is out of range. (Apparently these diagnostics are issued before the dead branch is eliminated, as @M Oehm also observed.)
Solution:
Ensure the shifts are in the range 0-31 on both paths, regardless of the value and type of x.
x & 31 is a stronger guarantee than x % 32 or x % 32u: % can produce negative remainders when x < 0 and the type is wide enough.
#define SET_ERROR_BIT(x) do{\
    if((x) < 0 || (x) > 63){\
        break;\
    }\
    if(((uint32_t)x) < 32U){\
        (error_field |= ((uint32_t)1U << ((x) & 31)));\
        break;\
    } else if(((uint32_t)x) < 64U){\
        (error_field2 |= ((uint32_t)1U << ((x) & 31)));\
    }\
}while(0)
As a general rule, it is good practice to put () around every use of x in a macro.
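A small illustration of the difference (assuming 32-bit int and two's complement):

int x = -1;
int r1 = x % 32;   /* -1: the remainder keeps the sign of x */
int r2 = x & 31;   /* 31: always lands in 0..31 */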
Seeing this thread, I wanted to show a nice (and perhaps cleaner) way to set, reset and toggle a bit in two unsigned integers, as in the thread. This code should still be on topic because it uses an x that is an unsigned int (or an int) rather than an enum value.
The code is at the end of this answer.
It receives a number of parameter pairs as input. Each pair is a letter and a number. The letter may be:
S to set a bit
R to reset a bit
T to toggle a bit
The number is a bit position from 0 to 63. The macros discard any number greater than 63, leaving the variables unmodified. Negative values aren't handled because a bit position is assumed to be unsigned.
For example (if we name the program bitman):
Executing: bitman S 0 S 1 T 7 S 64 T 7 S 2 T 80 R 1 S 63 S 32 R 63 T 62
The output will be:
S 0 00000000-00000001
S 1 00000000-00000003
T 7 00000000-00000083
S 64 00000000-00000083
T 7 00000000-00000003
S 2 00000000-00000007
T 80 00000000-00000007
R 1 00000000-00000005
S 63 80000000-00000005
S 32 80000001-00000005
R 63 00000001-00000005
T 62 40000001-00000005
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>   /* strtoul() */

static uint32_t err1 = 0;
static uint32_t err2 = 0;

/* err1 = err1 is a deliberate no-op branch for out-of-range bit numbers */
#define SET_ERROR_BIT(x) (\
    ((unsigned)(x) > 63) ? err1 = err1 : ((x) < 32) ?\
        (err1 |= (1U << (x))) :\
        (err2 |= (1U << ((x) - 32)))\
)
#define RESET_ERROR_BIT(x) (\
    ((unsigned)(x) > 63) ? err1 = err1 : ((x) < 32) ?\
        (err1 &= ~(1U << (x))) :\
        (err2 &= ~(1U << ((x) - 32)))\
)
#define TOGGLE_ERROR_BIT(x) (\
    ((unsigned)(x) > 63) ? err1 = err1 : ((x) < 32) ?\
        (err1 ^= (1U << (x))) :\
        (err2 ^= (1U << ((x) - 32)))\
)

int main(int argc, char *argv[])
{
    int i;
    unsigned int x;

    for (i = 1; i < argc; i += 2) {
        x = strtoul(argv[i+1], NULL, 0);
        switch (argv[i][0]) {
        case 'S':
            SET_ERROR_BIT(x);
            break;
        case 'T':
            TOGGLE_ERROR_BIT(x);
            break;
        case 'R':
            RESET_ERROR_BIT(x);
            break;
        default:
            break;
        }
        printf("%c %2u %08X-%08X\n", argv[i][0], x, err2, err1);
    }
    return 0;
}
The macros are split across several lines, but each expands to a single expression.
main does no error checking, so if the parameters aren't specified correctly the program's behaviour is undefined.
I am trying to build a macro that runs code only once.
This is very useful, for example, when you loop over code and want something inside to happen only once. The easy method:
static int checksum;

for ( ; ; )
{
    if (checksum == 0) { checksum = 1; /* ... */ }
}
But it is a bit wasteful and confusing. So I have these macros, which check individual bits instead of the true/false state of a whole variable:
#define CHECKSUM(d) static d checksum_boolean
#define CHECKSUM_IF(x) if( ~(checksum_boolean >> x) & 1) \
{ \
checksum_boolean |= 1 << x;
#define CHECKSUM_END }1
The 1 at the end forces the user to put a semicolon after CHECKSUM_END; my compiler allows this.
The problem is figuring out how to do this without requiring the user to specify x (the number of the bit to be checked).
So it could be used like this:
CHECKSUM(char); // 7 run-once codes can be used
for( ; ; )
{
CHECKSUM_IF
// code..
CHECKSUM_END;
}
Ideas how can I achieve this?
I guess you're saying you want the macro to automatically track which bit of your bitmask holds the flag for the current test. You could do it like this:
#define CHECKSUM(d) static d checksum_boolean; \
                    d checksum_mask

#define CHECKSUM_START do { checksum_mask = 1; } while (0)

#define CHECKSUM_IF do { \
    if (!(checksum_boolean & checksum_mask)) { \
        checksum_boolean |= checksum_mask;

#define CHECKSUM_END \
    } \
    checksum_mask <<= 1; \
} while (0)

#define CHECKSUM_RESET(i) do { checksum_boolean &= ~((uintmax_t) 1 << (i)); } while (0)
Which you might use like this:
CHECKSUM(char); // 7 run-once codes can be used
for( ; ; )
{
CHECKSUM_START;
CHECKSUM_IF
// code..
CHECKSUM_END;
CHECKSUM_IF
// other code..
CHECKSUM_END;
}
Note, however, that this has severe limitations:
The CHECKSUM_START macro and all the corresponding CHECKSUM_IF macros must all appear in the same scope
Control must always pass through CHECKSUM_START before any of the CHECKSUM_IF blocks
Control must always reach the CHECKSUM_IF blocks in the same order. It may only skip a CHECKSUM_IF block if it also skips all subsequent ones that use the same checksum bitmask.
Those constraints arise because the preprocessor cannot count.
To put it another way, barring macro redefinitions, a macro without any arguments always expands to exactly the same text. Therefore, if you don't use a macro argument to indicate which flag bit applies in each case then that needs to be tracked at run time.
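If compiler extensions are acceptable, the non-standard __COUNTER__ macro (GCC, Clang, MSVC) can count use sites at compile time. A sketch in the spirit of the asker's bit-indexed macros (caveats: __COUNTER__ increments on every expansion anywhere in the translation unit, and more than 32 use sites would overflow the mask):

#include <stdint.h>

static uint32_t checksum_bits;

/* test-and-set bit n in one expression; the condition is true only once */
#define RUN_ONCE_IMPL(n) \
    if (!(checksum_bits & (1u << (n))) && (checksum_bits |= 1u << (n)))
#define RUN_ONCE_EXPAND(n) RUN_ONCE_IMPL(n)
#define RUN_ONCE RUN_ONCE_EXPAND(__COUNTER__)

void loop_body(void)
{
    RUN_ONCE {
        /* runs only on the first call */
    }
}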
I'm trying to use assembly in C code using C variables.
My code looks like this:
__asm { INT interruptValue };
Where interruptValue is a variable I get from the user (e.g. 15 or 15h).
When I try to compile I get:
Assembler error: 'Invalid instruction operands'
I don't know the correct type for interruptValue. I tried long, int, short, char and char*, but none of them worked.
The INT opcode does not allow a variable (register or memory) as its operand; the interrupt number is encoded as an immediate byte in the instruction itself, so you have to use a constant like INT 13h.
If you really want to call variable interrupts (and I cannot imagine a use case for doing so), use something like a switch statement to decide which interrupt to use.
Something like this:
switch (interruptValue)
{
    case 3:
        __asm { INT 3 };
        break;
    case 4:
        __asm { INT 4 };
        break;
    ...
}
EDIT:
Here is a simple dynamic approach:
#include <stdlib.h>   /* malloc(), free() */

void call_interrupt_vector(unsigned char interruptValue)
{
    /* build the code to call a specific interrupt vector at run time;
       note this assumes the buffer is executable (no DEP/NX protection) */
    unsigned char* assembly = (unsigned char*)malloc(5 * sizeof(unsigned char));

    assembly[0] = 0xCC; /* INT 3 (the dedicated one-byte breakpoint encoding) */
    assembly[1] = 0x90; /* NOP */
    assembly[2] = 0xC2; /* RET imm16 ...                  */
    assembly[3] = 0x00; /* ... with a zero operand,       */
    assembly[4] = 0x00; /* i.e. a plain near return       */

    /* if it is not INT 3 (debug break), change the opcode accordingly */
    if (interruptValue != 3)
    {
        assembly[0] = 0xCD;           /* generic INT opcode */
        assembly[1] = interruptValue; /* second byte is the actual interrupt vector */
    }

    /* call the "dynamic" code */
    __asm
    {
        call [assembly]
    }

    free(assembly);
}