Simple code confusion about define directive parameter - c

I'm trying to learn C to program a small routine on a Texas Instruments MSP430. Could you help me understand the ((unsigned char *) 0x0023) part?
I'm having trouble understanding the middle portion of this #define directive. I've tried looking it up but found nothing on the ((unsigned char *) 0x0023) portion. It looks like a type cast, but it's not casting anything.
My main concern is the 0x0023 (decimal 35). Is this just an unsigned char pointer with 35 bits?
Code:
#define P1IFG_ptr ((unsigned char *) 0x0023) unsigned char result;
Any help is really appreciated and thank you in advance.

((unsigned char *) 0x0023)
is a pointer to address 0x23. The cast turns the integer constant 0x0023 (decimal 35) into a pointer to unsigned char, so 35 is the address being pointed at, not a size in bits.
I think there's a missing newline in your code sample...
On the MSP430 this is the port P1 interrupt flag register:
Each PxIFGx bit is the interrupt flag for its corresponding I/O pin
and is set when the selected input signal edge occurs at the pin. All
PxIFGx interrupt flags request an interrupt when their corresponding
PxIE bit and the GIE bit are set. Each PxIFG flag must be reset with
software. Software can also set each PxIFG flag, providing a way to
generate a software initiated interrupt.
Bit = 0: No interrupt is pending
Bit = 1: An interrupt is pending
Only transitions, not static levels, cause interrupts. If any PxIFGx
flag becomes set during a Px
interrupt service routine, or is set after the RETI instruction of a
Px interrupt service routine is executed, the set PxIFGx flag
generates another interrupt. This ensures that each transition is
acknowledged.
You can read from this register, e.g.:
unsigned char result;
result = *P1IFG_ptr;
Or write to it, e.g.:
*P1IFG_ptr = 1;
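Since the data sheet says each PxIFG flag must be reset in software, the usual pattern is to clear only the bit you have just handled. A minimal sketch (BIT3 here is an assumed mask for pin P1.3; the real MSP430 headers normally define P1IFG and the BITx masks for you):
#define BIT3 0x08            /* assumed mask for pin P1.3 */
*P1IFG_ptr &= ~BIT3;         /* clear only the P1.3 flag, leaving the other pending flags alone */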

Related

How to properly set up DMA on a dsPIC33F

I have code written by someone else (for a dsPIC33FJ128MC706A) which initializes UART1 and the PWMs, uses a busy wait in the U1RX interrupt to get 14 bytes, and uses Timer1 interrupts to update the PWM registers if new data arrives (debugged: it enters _U1RXInterrupt when data is sent).
I tried modifying it to use DMA instead of busy wait. I changed nothing in the initialization, but called the following function after initialization of UART:
unsigned char received_data1 [0x0f] __attribute__((space(dma)));
unsigned char received_data2 [0x0f] __attribute__((space(dma)));

void dma_init(void)
{
    DMA0REQ = 0x000b;       // IRQSEL = 0b0001011 (UART1RX)
    DMA0PAD = (volatile unsigned int) &U1RXREG;
    DMA0STA = __builtin_dmaoffset(received_data1);
    DMA0STB = __builtin_dmaoffset(received_data2);
    DMA0CNT = 0xe;          // 15 bytes in
    DMA0CON = 0x0002;       // continuous ping-pong + post-increment
    IFS0bits.DMA0IF = 0;
    IEC0bits.DMA0IE = 1;
    DMA0CONbits.CHEN = 1;
}
This is based on example 22-10 in the FRM with a few slight changes (UART1 receiver instead of UART2). I can't seem to find other examples that don't use library functions (which I didn't even know existed up until I came across those examples).
(Also, to be consistent with the code I got, I put in _ISR after void instead of making interrupt an attribute, but that didn't change anything)
In the example there was no interrupt for the UART1 receiver, but according to the FRM it should be enabled. It made sense to me that I don't need to clear the interrupt flag myself (it would kind of defeat the purpose of DMA if I needed to use the CPU for it).
And yet, when I send data, the breakpoint on DMA0Interrupt doesn't trigger, but the one on the default interrupt does, and I have PWMIF=1 and U1RXIF=1. I also tried commenting out the default ISR and adding a _U1ErrInterrupt even though examining the flags I didn't notice any error flags being raised, just the normal U1RXIF.
I don't understand why this would happen. I haven't changed the PWM code at all, and the PWM flag isn't raised in the original code when I place a breakpoint in the _U1RxInterrupt. I don't even understand how a DMA set-up could cause a PWM error.
The U1RXIF flag seems to be telling me the DMA didn't handle the incoming byte (I assume it's the DMA that clears the flag), which once again comes back to the question: what did I do wrong in this DMA setup?
UART initialization code:
in a general initialization function (called by main())
TRISFbits.TRISF2 = 1; /* U1RX */
TRISFbits.TRISF3 = 0; /* U2TX */
also called by main:
void UART1_init_USB (void)
{
    /* Fcy 7.3728 MHZ */
    U1MODE = 0x8400;         /* U1ATX Transmit status and control register */
    U1STA = 0x2040;          /* Receve status and control register Transmit disabled*/
    U1BRG = 0x000f;          /* Baund rate control register 115200 bps */
    IPC2bits.U1RXIP = 3;     /* Receiver Interrupt Priority bits */
    IEC0bits.U1RXIE = 1;     /* Enable RS232 interrupts */
    IFS0bits.U1RXIF = 0;
}
(comments by original code author)

C program stops execution when assigning variable

I'm not sure if this happens when assigning a variable specifically, but when debugging the assembly code, execution reaches an RJMP $+0000 instruction, where the program hangs.
EDIT: I added the included libraries in case that's relevant.
#define __DELAY_BACKWARD_COMPATIBLE__
#define F_CPU 8000000UL
#include <avr/io.h>
#include <avr/delay.h>
#include <stdint.h>
void ReadTemp(uint8_t address){
    ADCSRA = ADCSRA | 0b10000111;        //enable ADC, CLK/128 conversion speed
    ADMUX = ADMUX | 0b01000000;          //Use internal 2.56V Vref and PA0 as input, right-hand justified
    ADCSRA |= (1 << ADSC);               //start conversion
    while(!(ADCSRA & (1 << ADIF))) {}    // wait until process is finished;
    uint8_t low_value = ADC & 0x00FF;
    // or low_value = ADCL;
    uint8_t high_value = ADC & 0xFF00;   //problem here
    ...
}
I don't know what any of this is doing, but I do see an error in the bitwise math.
uint8_t low_value = ADC & 0x00FF;
uint8_t high_value = ADC & 0xFF00; //problem here
low_value and high_value are both 8 bits wide (uint8_t). I am going to go out on a limb here and say ADC is 16 bits. For high_value, you are ANDing ADC with 0xFF00 and then truncating the value to 8 bits, so high_value will always be zero.
What should be done is:
uint8_t high_value = (ADC & 0xFF00) >> 8;
This grabs the high byte of ADC, shifts it right by 8 bits, and then assigns it to the 8-bit high_value variable, giving you the correct value.
How you are doing low_value is correct. As a matter of fact, you could simply do:
uint8_t low_value = ADC;
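If the intent was to rebuild the full reading from the two halves, they would be combined along these lines (just a sketch; the next answer shows a simpler way):
uint16_t result = ((uint16_t)high_value << 8) | low_value;   // reassemble the 16-bit ADC reading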
There is something somewhat suboptimal here:
uint8_t low_value = ADC & 0x00FF;
You are reading ADC, which is a 16-bit register, implemented as a pair
of 8-bit registers. As shown in the disassembly, this requires two in
instructions, one per byte. And then you are just throwing away one of
those bytes. You may think that the compiler is smart enough to avoid
reading the byte it is going to discard right away. Unfortunately, it
cannot do that, as I/O registers are declared as volatile. The
compiler is forced to access the register as many times as the source
code does.
If you just want the low byte, you should read only that byte:
uint8_t low_value = ADCL;
You then wrote:
uint8_t high_value = ADC & 0xFF00;
As explained in the previous answer, high_value will be zero. Yet the
compiler will have to read the two bytes again, because the I/O register
is volatile. If you want to read the high byte, read ADCH.
But why would you want to read those two bytes one by one? Is it to put
them back together into a 16-bit variable? If this is the case, then
there is no need to read them separately. Instead, just read the 16-bit
register in the most straight-forward fashion:
uint16_t value = ADC;
A long time ago, gcc didn't know how to handle 16-bit registers, and
people had to resort to reading the bytes one by one, and then gluing
them together. You may still find very old example code on the Internet
that does that. Today, there is absolutely no reason to continue
programming this way.
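Putting it together, a minimal sketch of the corrected read (keeping the original conversion-start and wait steps, and assuming you just want the whole result in one variable):
ADCSRA |= (1 << ADSC);                 // start conversion
while (!(ADCSRA & (1 << ADIF))) {}     // wait until the conversion is finished
uint16_t value = ADC;                  // one 16-bit read gives both bytes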
Then you wrote:
//problem here
Nope, the problem is not here. That is not what generated the rjmp
instruction. The problem is probably right after that line, in the code
you have chosen not to post. You have some bug that manifests itself
only when optimizations are turned on. This is typical of code that
produces undefined behavior: works as expected with optimizations off,
then does weird “unexplainable” things when you enable optimizations.

STM8S Cosmic compiler defines

I am new to the Cosmic compiler and STM8.
In iostm8s103.h the Switch control register is defined as
volatile char CLK_SWCR #0x50c5;
How do I address the switch busy bit (SWBSY bit 0) in that register?
I need to wait until the switch busy bit is clear (i.e. a while loop on that bit).
It is strange that I cannot find an example for the Cosmic compiler.
With this microcontroller, using the volatile registers is simple.
#include <iostm8s103.h>
CLK_SWCR |= SWBSY; // to set the bit 0
CLK_SWCR &= ~SWBSY; // to reset the bit 0
In iostm8s103.h, the register is defined as follows:
volatile char CLK_SWCR #0x50c5; /* Switch Control reg */
So, to wait while BUSY, with that iostm8s103.h, the code looks like:
while ((CLK_SWCR & SWBSY) != 0);
If using stm8s.h instead of iostm8s103.h, the CLK registers
are defined as a struct.
So, to wait while BUSY, with that stm8s.h, the code looks like:
while ((CLK->SWCR & CLK_SWCR_SWBSY) != 0);
or
while (CLK->SWCR & CLK_SWCR_SWBSY);
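If you stick with iostm8s103.h and your project does not already define the bit masks, SWBSY can be defined by hand; it is bit 0 of CLK_SWCR. A minimal sketch of the busy-wait, with the mask definition being an assumption on my part:
#include <iostm8s103.h>

#define SWBSY 0x01                  /* assumed: bit 0 of CLK_SWCR, the switch busy flag */

while (CLK_SWCR & SWBSY)            /* spin while the clock switch is still in progress */
    ;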

PIC pass SFR address to function in C

I am attempting to pass a reference to an I/O pin as a function argument on a PIC24F MCU using C. For PICs, the device header file provides access to the I/O port registers via:
LATAbits.LATA2 = 0; // sets the pin (RA2 in this case) low.
if (PORTAbits.RA3) { // reads the state of the pin. (RA3)
I want to do something like this:
int main() {
    Configure();                   // Sets up peripherals, etc.
    WaitForHigh(PORTAbits.RA3);    // waits for pin RA3 to go hi.
    ...
    return 0;
}

void WaitForHigh( ?datatype? pin_reference ) {
    while( !pin_reference );       // Stays here until the pin goes hi.
}
So what datatype am I trying to pass here? And what's actually going on when I poll that pin? Below, I copy a relevant portion from the PIC24F device header that I'm using in case it helps.
#define PORTA PORTA
extern volatile unsigned int PORTA __attribute__((__sfr__));
typedef struct tagPORTABITS {
    unsigned RA0:1;
    unsigned RA1:1;
    unsigned RA2:1;
    unsigned RA3:1;
    unsigned RA4:1;
    unsigned RA5:1;
} PORTABITS;
extern volatile PORTABITS PORTAbits __attribute__((__sfr__));
Thank you in advance!
As an alternative to using a macro, a function can accept both the PORT register address (or latch register address, e.g. LATA in the case of a pin configured for output) and the mask of the bit in the register that is needed. For example:
#include <p24FV32KA301.h>   // defines both PORTA and _PORTA_RA3_MASK

void WaitForHigh( volatile unsigned int * port, unsigned int pin_mask ) {
    while( !(*port & pin_mask) );   // Stays here until the pin goes hi.
}

int main()
{
    ...
    WaitForHigh( &PORTA, _PORTA_RA3_MASK );   // waits for pin RA3 to go hi.
    ...
    return 0;
}
Please note that the PORT bit values are obtained through a bit field, so, answering your question, you can't. Bit fields don't have addresses, so you cannot pass one as a pointer to a function.
Instead, you could use a Macro:
#define WaitForHigh(p) do{while(!(p));}while(0)
It is true that macros have their drawbacks for code readability, yet, given that proper care is taken, there are situations where they're the best solution. It is arguable whether a macro is the best solution in this Q&A, yet it is important to mention.
Thanks to the commenters for the suggestions to improve the macro's safety.
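With the macro above, the call from main() in the question would simply be, e.g.:
WaitForHigh(PORTAbits.RA3);   // spins here until pin RA3 reads high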
You can combine the preprocessor with a function to get what you want, along with compile-time checking of the symbols. For example:
#define PORT_FUNC(f, p, b) f(p, b)
#define WaitForHigh(p, b) PORT_FUNC(WaitForHighImp, &p, _ ##p## _ ##b## _MASK)
void WaitForHighImp(volatile unsigned* p, unsigned char mask)
{
    while (!(*p & mask))
        ;
}
int main()
{
    WaitForHigh(PORTA, RA3);
}
The advantage of this approach is that you only say "PORTA" once and "RA3" once at the call site, and the compiler makes sure the bit name is present for that port and that the bit exists.
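To make the token pasting concrete, here is roughly what the preprocessor turns the call into (a sketch of the expansion, assuming the device header defines _PORTA_RA3_MASK):
WaitForHigh(PORTA, RA3);                    // what you write
WaitForHighImp(&PORTA, _PORTA_RA3_MASK);    // what the compiler actually sees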

How does this flush USART c code work?

So I have old code I am looking at that I am supposed to update for a new microcontroller. In the old code there is a function to flush the USART in case there is junk on it from start-up. The code is below:
#define RXC 7
#define RX_COMPLETE (1<<RXC)
void UART1_FLUSH(void){
    unsigned char dummy;
    while ( UCSR1A & RX_COMPLETE ) dummy = UDR1;
}
Now, from what I understand, the while loop keeps going as long as there is something to read from the USART via register UDR1; that is why the value is stored in dummy, since we do not need it. What I need explained is why the while loop works the way it does.
Looking for UCSRnA in http://upcommons.upc.edu/pfc/bitstream/2099.1/10997/4/Annex3.pdf, that code simply waits until bit 7 ("RXCn: USART Receive Complete") in UCSR1A is off.
That document says about bit 7: "This flag bit is set when there are unread data in the receive buffer and cleared when the receive buffer is empty."
(1<<RXC) is the numerical value of bit 7. A bitwise AND (the &) between it and the value read from UCSR1A results in 0 (if the bit is off) or (1<<RXC) (if the bit is on). Since (1<<7) is 128 and that is not zero, the loop will be entered when the bit is set.
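As a concrete worked example of that AND (the register values here are just made up for illustration):
unsigned char status = 0xA0;                    /* 0b10100000: RXC (bit 7) set plus another status bit */
unsigned char pending = status & RX_COMPLETE;   /* 0b10100000 & 0b10000000 = 0x80, non-zero, so keep reading */
status = 0x20;                                  /* 0b00100000: RXC clear, receive buffer empty */
pending = status & RX_COMPLETE;                 /* 0b00100000 & 0b10000000 = 0x00, so the loop exits */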
