I'm not very good at C, but I have written a very simple program for a C8051F312 microcontroller. My code isn't working. Please help me figure out what I did wrong.
#include C8051F310.h
#include stdio.h
sbit LED_16 = P1^7; // green LED: 1 = ON; 0 = OFF
void init(void)
{
    // XBRN registers_init
    XBR0 = 0x00;
    XBR1 = 0x00;        // Enable the crossbar
    PCA0MD = 0x00;
    // port_init
    P0MDOUT = 0x00;     // Output configuration for P0
    P1MDOUT = 0x40;     // Output configuration for P1
    P2MDOUT = 0x00;     // Output configuration for P2
    P3MDOUT = 0x00;     // Output configuration for P3
}

void main(void)
{
    init();
    while (1)
    {
        LED_16 = 1;     // LED continuously illuminated
    }
}
1. First of all, you should use one of the following two forms of the #include directive:
#include "path-spec"
#include <path-spec>
not #include path-spec as you did.
2. To configure bit 7 of the P1 general-purpose I/O port for push-pull mode, you should set
P1MDOUT = 0x80;
not
P1MDOUT = 0x40;
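With just those two changes applied, the relevant lines from the question would look like this (a minimal sketch, assuming the Keil C51 sbit syntax used in the question and an active-high LED):
#include <C8051F310.h>  // angle brackets: header is found on the compiler's include path
#include <stdio.h>

sbit LED_16 = P1^7;     // green LED: 1 = ON; 0 = OFF

void init(void)
{
    // XBRN registers_init
    XBR0 = 0x00;
    XBR1 = 0x00;
    PCA0MD = 0x00;
    // port_init
    P1MDOUT = 0x80;     // bit 7 set: P1.7 configured as a push-pull output
}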
I used MPLAB X IDE (software for microcontrollers) to compile my code, but for some reason it keeps reporting at least two errors (specifically in the area that is bolded). I've looked into it, but I'm still not sure why, so any help would be greatly appreciated.
#include <stdio.h>
#include <stdlib.h>
#include <xc.h>
#include <math.h>
#include <p18f4620.h>
#pragma config OSC = INTIO67
#pragma config WDT = OFF
#pragma config LVP = OFF
#pragma config BOREN = OFF
#define delay 5
// Prototype Area to place all the references to the routines used in the program
void Init_ADC(void);
unsigned char Get_Full_ADC(void);
void Flash_LED(unsigned char);
void main(void)
{
    unsigned int ADC_Result;            // local variable to store the result
    Init_ADC();                         // initialize the A2D converter
    TRISB = 0x00;                       // make PORTB as all outputs
    while (1)
    {
        ADC_Result = Get_Full_ADC();    // call routine to measure the A2D port
        Flash_LED(ADC_Result);          // call routine to flash the LED based on the delay
                                        // indicated by ADC_Result
    }
}

void Init_ADC(void)
{
    ADCON0 = 0x01;                      // select channel AN0, and turn on the A2D subsystem
    ADCON1 = 0x0E;                      // set pin 2 as analog signal, VDD-VSS as reference voltage
                                        // and right justify the result
    ADCON2 = 0xA9;                      // set the bit conversion time (TAD) and acquisition time
}

**unsigned int Get_Full_ADC(void)
{
    int result;
    ADCON0bits.GO = 1;                  // Start Conversion
    while (ADCON0bits.DONE == 1);       // Wait for conversion to be completed (DONE=0)
    result = (ADRESH * 0x100) + ADRESL; // Combine result of upper byte and lower byte
    return result;                      // return the most significant 8 bits of the result.
}**

void Flash_LED(unsigned int ADC_result)
{
    unsigned int counter1, counter2;
    LATB = 0x0A;                        // output to PORTB the pattern 00001010
    // delay loop
    for (counter2 = delay; counter2 > 0; --counter2)
    {
        for (counter1 = ADC_result; counter1 > 0; --counter1);
    }
    LATB = 0x05                         // output to PORTB the pattern 00000101
    // delay loop
    for (counter2 = delay; counter2 > 0; --counter2)
    {
        for (counter1 = ADC_result; counter1 > 0; --counter1);
    }
}
The function prototype (declaration) says
unsigned char Get_Full_ADC(void);
but its definition says
unsigned int Get_Full_ADC(void)
and also you have
int result;
...
return result;
So the types are never consistent: the declaration, the definition, and the returned local variable all disagree. The compiler will complain about the definition not matching the declaration.
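For illustration, one way to make all three agree is to use unsigned int throughout (choosing unsigned int here is my reading of the 10-bit ADC result, not something the question states):
unsigned int Get_Full_ADC(void);        // prototype now matches the definition below

unsigned int Get_Full_ADC(void)
{
    unsigned int result;                // same type as the declared return type
    ADCON0bits.GO = 1;                  // start the conversion
    while (ADCON0bits.DONE == 1);       // wait for the conversion to complete
    result = (ADRESH * 0x100) + ADRESL; // combine the high and low result bytes
    return result;
}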
I would like to use a dsPIC33EP32MC204 microcontroller for a motor control application. However, I am unable to make the High-Speed PWM work (no PWM pulses on the PWM2H pin), even though I was following the user manual and the family reference manual. Here is my code:
void Init_PWM(void)
{
    PTCONbits.PTEN = 0;         // PWM module is disabled
    PTCONbits.PTSIDL = 0;       // PWMx time base runs in CPU Idle mode
    PWMCON2bits.FLTIEN = 0;     // Fault interrupt is disabled and the FLTSTAT bit is cleared
    PWMCON2bits.ITB = 0;        // PTPER register provides timing for this PWM generator
    PWMCON2bits.MDCS = 0;       // PDCx register provides duty cycle information for this PWM generator
    PWMCON2bits.MTBS = 0;       // PWM generator uses the primary master time base for synchronization and as the clock source
    PWMCON2bits.IUE = 0;        // Updates to the active PDCx registers are synchronized to the PWMx period boundary
    PTCON2bits.PCLKDIV = 0b000; // PWM input clock prescaler
    PWMCON2bits.CAM = 0;        // Edge-aligned PWM
    PWMCON2bits.DTC = 0b10;     // Dead-time function is disabled
    IOCON2bits.PENH = 1;        // PWMx module controls PWMxH pin
    IOCON2bits.POLH = 0;        // PWMxH pin is active-high
    IOCON2bits.OVRENH = 0;      // PWMx generator controls PWMxH pin
    IOCON2bits.PMOD = 0b01;     // PWM I/O pin pair is in Redundant Output mode
    PHASE2 = 0;                 // No phase shift
    PTPER = 0x4E20;             // Period register
    PTCONbits.PTEN = 1;         // PWM module is enabled
}
Related code from the main() function:
TRISBbits.TRISB12 = 0;
Init_PWM();
PDC2 = 0x2710; //Duty Cycle
I would be grateful if someone could tell me why I can't see any PWM pulses after completing the configuration above. What am I doing wrong? Thank you!
I'm working on a project, part of which is to take readings with an ultrasonic sensor and send them over serial communication. I wrote the code below, but it gives random readings and sometimes reads 0. Is the formula I used for finding the distance right, or is there another formula? I'm using an ATmega32 with the internal 8 MHz clock. Can someone help me find what's wrong with my code?
#define F_CPU 8000000UL
#include <avr/io.h>
#include <util/delay.h>
#include <avr/interrupt.h>
void inti_serial();
static volatile int pulse = 0;
static volatile int change = 0;
int main(void)
{
    /* Replace with your application code */
    inti_serial();
    MCUCR |= (1 << ISC00);              // Any logical change on INT0
    GICR |= (1 << INT0);                // Enable INT0
    TCCR1A = 0;
    sei();
    while (1)
    {
        PORTC |= (1 << 0);
        _delay_us(15);
        PORTC &= ~(1 << 0);
        while (!(UCSRA & (1 << UDRE)));
        UDR = ((pulse / 2) * 1 * (1 / F_CPU) * 343);
        _delay_ms(100);
    }
}

ISR(INT0_vect)
{
    if (change == 1)                    // when logic changes from HIGH to LOW
    {
        TCCR1B = 0;                     // disabling counter
        pulse = TCNT1;                  // count memory is updated to integer
        TCNT1 = 0;                      // resetting the counter memory
        change = 0;
    }
    if (change == 0)                    // when logic changes from LOW to HIGH
    {
        TCCR1B |= (1 << CS10);          // enabling counter
        change = 1;
    }
}

void inti_serial()
{
    UCSRB |= (1 << TXEN);
    UCSRC |= (1 << UCSZ0) | (1 << UCSZ1) | (1 << URSEL);
    UBRRL = 0x33;
}
I see a few options for improvement in your code:
You are writing a sample from the ISR into a variable and then reading and outputting it continuously from the main loop. Instead, you should output each new sample only once (this makes the serial data much smaller and lets you concentrate on the actual content and sample timing).
Before you think about the correct formula, you should verify that your sampling mechanism is right. Without details about your sensor, nobody here can judge your formula anyway.
Instead of sampling a free-running counter, you could use the input capture circuit of the processor (more accurate, less jitter due to interrupt latency); see the sketch after this list.
Instead of resetting the counter register to zero, you could subtract two consecutive samples from each other (less sample jitter due to interrupt latency).
Instead of deducing the edge from a toggled flag, ask the hardware about the state of the pin or which edge triggered the interrupt (or the capture).
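To illustrate the input-capture and subtract-consecutive-samples points, here is a minimal sketch for the ATmega32, assuming the echo signal is wired to the ICP1 pin (PD6); the pin choice and wiring are assumptions, not something stated in the question:
#include <avr/io.h>
#include <avr/interrupt.h>

static volatile uint16_t pulse_ticks = 0;   // echo pulse width in timer ticks
static volatile uint8_t pulse_ready = 0;    // set once per completed measurement

void init_capture(void)
{
    TCCR1B = (1 << ICES1) | (1 << CS10);    // capture on rising edge, prescaler = 1
    TIMSK |= (1 << TICIE1);                 // enable the Timer1 input-capture interrupt
    sei();
}

ISR(TIMER1_CAPT_vect)
{
    static uint16_t rise_time;
    if (TCCR1B & (1 << ICES1))              // rising edge: start of the echo pulse
    {
        rise_time = ICR1;                   // hardware-latched timestamp, no ISR jitter
        TCCR1B &= ~(1 << ICES1);            // next capture on the falling edge
    }
    else                                    // falling edge: end of the echo pulse
    {
        pulse_ticks = ICR1 - rise_time;     // difference of two samples; unsigned math handles wrap-around
        pulse_ready = 1;                    // main loop can now send this sample exactly once
        TCCR1B |= (1 << ICES1);             // re-arm for the next rising edge
    }
}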
This question already has an answer here:
C8051f312 microcontroller [closed]
(1 answer)
I have the C8051F312 microcontroller, and I have to turn on the LED (on bit 7 of the P2 port). My code is not working; maybe you have some ideas.
#include <C8051F310.H>
#include <stdio.h>
sbit LED_16 = P2^7; // P2^7-->green LED: 1 = ON; 0 = OFF
void init(void)
{
    // XBRN registers_init
    XBR1 = 0x40;        // Enable the crossbar
    PCA0MD &= 0x40;     // Disable Watchdog
    P2MDOUT |= 0xF0;
    ADC0CN = 0x80;
    ADC0CF = 0xFC;
    REF0CN = 0x08;
}

void main(void)
{
    init();
    while (1)
    {
        LED_16 = 1;     // LED continuously illuminated
    }
}
(Sorry for the formatting, but I had problems with the text editor.)
First you need to set the GPIO direction. For the 8051 microcontroller family (according to my knowledge; I don't know about the 8051F312 specifically), writing 1 to a pin lets that GPIO be used as an input and writing 0 drives it as an output. So in your case you first need to set P2.7 as an output; for that you would do LED_16 = 0; in your init function.
After that, you need to consider how your LED is connected to the microcontroller pin. If the anode of the LED is connected to the microcontroller pin, you need to drive it HIGH to light the LED. If the cathode of the LED is connected to the microcontroller pin, you need to drive it LOW to light the LED.
If the anode of the LED is connected to the microcontroller, your code should be:
void main(void)
{
    init();
    while (1)
    {
        LED_16 = 1;     // LED continuously illuminated
    }
}
If the cathode of the LED is connected to the microcontroller, then your code should be:
void main(void)
{
    init();
    while (1)
    {
        LED_16 = 0;     // LED continuously illuminated
    }
}
Please see the AN101 application note from Silicon Labs. I have compared its example source code with your code and noticed that:
They use XBR2 = 0x40 for crossbar initialization.
They enable /SYSCLK by XBR1 = 0x80.
They configure the output pin for push-pull mode with PRT1CF |= 0x40 (I think it should be PRT1CF |= 0x80 in your case).
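For reference, here is a minimal sketch of those settings translated to the register names that the C8051F31x family (and the code in the question) actually uses. The mapping is my assumption, since PRT1CF and XBR2 belong to older C8051 parts, and the LED in this question sits on P2.7 rather than a port-1 pin:
void init(void)
{
    PCA0MD &= ~0x40;    // clear WDTE: disable the watchdog timer
    XBR1 |= 0x40;       // XBARE = 1: enable the crossbar so the port latches drive the pins
    P2MDOUT |= 0x80;    // configure P2.7 as a push-pull output for the LED
}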
Can anyone please explain this code?
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/signal.h>
char n = 0;
char FLAG =0x00;
char FLAG2 =0x00;
char RST=0x00;
unsigned char minutes_save [20];
unsigned char seconds_save [20];
int seconds, minutes, shift, count;
void init(void)
{
    DDRB = 0xff;
    DDRA = 0xff;
    MCUCR = 0x0F;
    GICR = 0xC0;
    TCCR2 = 0x05;
    ASSR = 0x08;
    TCNT2 = 0x00;
    sei();
}

SIGNAL(SIG_INTERRUPT0)
{
    if (FLAG == 0x00)
        TIMSK = 0x40;
    if (FLAG == 0x01)
        TIMSK = 0x00;
    FLAG = FLAG ^ 1;
}
Whenever the program receives the INT0 interrupt, it sets TIMSK to either 0x40 (64 in decimal) or 0x00, depending on whether FLAG is currently 0 or 1, and then it inverts FLAG by XORing it with 1.
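For illustration, here is a slightly clearer way to write that handler (assuming an ATmega32-style TIMSK, where bit 6, 0x40, is TOIE2, the Timer/Counter2 overflow interrupt enable; the original assigns the whole register, so this read-modify-write version is not byte-for-byte identical):
SIGNAL(SIG_INTERRUPT0)
{
    if (FLAG == 0x00)
        TIMSK |= (1 << TOIE2);      // enable the Timer2 overflow interrupt
    else
        TIMSK &= ~(1 << TOIE2);     // disable the Timer2 overflow interrupt
    FLAG ^= 1;                      // toggle the flag for the next INT0 event
}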
As for the rest of the code (the init() function, the other variables being declared, and the sei() function), there is not enough context provided by the code to determine what exactly it is doing/trying to do.
This page might be helpful: http://www.avr-asm-tutorial.net/avr_en/beginner/PDETAIL.html
It appears your code is setting register values on an ATMEL AVR embedded processor.