I have a problem with my code.
My aim is to write code that takes 8000-10000 values per second from the ADC.
I wrote a test program to configure the ADC, but according to my calculations the ADC conversion time should be:
* Tad = Tcy * (ADCS + 1) = (1/40 MHz) * 64 = 1.6 us
* Tconv = 14 * Tad = 14 * 1.6 us = 22.4 us
However, when I use the Stopwatch in MPLAB, I see that one conversion takes nearly 5 ms.
Configuration:
* dsPIC33FJ128MC802
* FRC + PLL -> ~80 MHz
* FCY = 40 MHz
Function for ADC configuration:
void adc_init(void)
{
    AD1CON1bits.ADON = 0;     // ADC off while configuring
    AD1CON1bits.AD12B = 1;    // 12-bit mode
    AD1CON1bits.FORM = 0;     // integer output format
    AD1CON1bits.SSRC = 0;     // manual conversion trigger (clearing SAMP)
    AD1CON1bits.ASAM = 0;     // manual sampling start
    AD1CON2bits.CSCNA = 1;    // scan inputs
    AD1CON2bits.CHPS = 0;
    AD1CON2bits.SMPI = 0;
    AD1CON3bits.ADRC = 0;     // clock from Tcy
    AD1CON3bits.ADCS = 63;    // Tad = 64 * Tcy
    AD1CSSLbits.CSS3 = 1;     // include AN3 in the scan
    AD1PCFGL = 0xFFFF;        // all pins digital...
    AD1PCFGLbits.PCFG3 = 0;   // ...except AN3 (analog)
    IPC3bits.AD1IP = 6;
    AD1CON1bits.ADON = 1;
}
Function for reading the ADC value:
unsigned int read_adc_value(void)
{
    AD1CON1bits.SAMP = 1;       // start sampling
    // while (!AD1CON1bits.SAMP);
    __delay_us(10);             // sampling time
    AD1CON1bits.SAMP = 0;       // end sampling, start conversion
    while (!AD1CON1bits.DONE);  // wait for conversion to finish
    return ADCBUF0;
}
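As an untested sketch of an alternative (assuming the same dsPIC33FJ128MC802 setup): with SSRC = 7 the ADC's internal counter ends sampling and starts the conversion itself after SAMC Tad periods, so no software delay is needed at all. The SAMC value and channel choice here are illustrative:

```c
/* Hedged sketch, not verified on hardware: auto-convert variant. */
void adc_init_auto(void)
{
    AD1CON1bits.ADON = 0;
    AD1CON1bits.AD12B = 1;   /* 12-bit mode */
    AD1CON1bits.SSRC = 7;    /* internal counter ends sampling, starts conversion */
    AD1CON1bits.ASAM = 0;    /* sampling begins when SAMP is set */
    AD1CON3bits.ADRC = 0;    /* clock from Tcy */
    AD1CON3bits.SAMC = 14;   /* auto-sample time: 14 Tad (illustrative) */
    AD1CON3bits.ADCS = 63;   /* Tad = 64 * Tcy = 1.6 us at FCY = 40 MHz */
    AD1CHS0bits.CH0SA = 3;   /* sample AN3 */
    AD1PCFGL = 0xFFFF;
    AD1PCFGLbits.PCFG3 = 0;  /* AN3 analog */
    AD1CON1bits.ADON = 1;
}

unsigned int read_adc_auto(void)
{
    AD1CON1bits.SAMP = 1;       /* hardware converts after SAMC Tad */
    while (!AD1CON1bits.DONE);  /* wait for the result */
    return ADCBUF0;
}
```

With this arrangement, each read should take roughly SAMC + 14 Tad, which is consistent with the throughput target of 8000-10000 samples per second.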
Good morning,
I am using ADC1 of a dsPIC33EP512GM604 and getting incorrect converted values.
To check this, I ran a cycle of 10 consecutive sampling/conversion operations.
The first value is always quite different from the rest, but it is the closest to the "right" value.
Here is the relevant code:
/* Setup ADC1 for measuring R */
ANSELBbits.ANSB3 = 1;        // ensure AN5 is analog
TRISBbits.TRISB3 = 1;        // ensure AN5 is an input
AD1CON1 = 0;
AD1CON1bits.ADSIDL = 1;
AD1CON1bits.AD12B = 1;       // 12-bit mode
AD1CON1bits.SSRC = 7;        // auto-convert
AD1CON2 = 0;
AD1CON2bits.VCFG = 0b001;    // external VREF+ / AVss
AD1CON2bits.SMPI = 0;
AD1CON3 = 0;
AD1CON3bits.SAMC = 0b11111;  // auto-sample time: 31 Tad
AD1CON3bits.ADCS = 0;
AD1CON4 = 0;                 // no DMA
AD1CHS0bits.CH0NA = 0;
AD1CHS0bits.CH0SA = 5;       // sample AN5
IFS0bits.AD1IF = 0;          // clear the A/D interrupt flag bit
IEC0bits.AD1IE = 0;          // do not enable the A/D interrupt
/* Read voltage value */
AD1CON1bits.ADON = 1;            // enable the A/D converter
__delay_us(25);
for (N = 0; N < 10; N++) {
    AD1CON1bits.SAMP = 1;
    __delay_us(5);               // wait for sampling time (min 3 Tad)
    AD1CON1bits.SAMP = 0;        // start the conversion
    while (!AD1CON1bits.DONE);   // wait for conversion to finish
    res[N] = (double) ADC1BUF0;
    /* --- just for test --- */
    sprintf(deb, "ADC1BUF0 = %.0f\r\n", res[N]);
    WriteStringUART1(deb);
    /* ---- end of test ---- */
}
And here are the results, for a fixed input voltage corresponding to a value of 215:
ADC1BUF0 = 222
ADC1BUF0 = 301
ADC1BUF0 = 296
ADC1BUF0 = 295
ADC1BUF0 = 295
ADC1BUF0 = 296
ADC1BUF0 = 296
ADC1BUF0 = 296
ADC1BUF0 = 296
ADC1BUF0 = 295
The first value, 222, is acceptably close to the expected 215 for my purposes; the other values are not.
What am I doing wrong?
I've used a dsPIC33FJ64MC802 and was able to use the ADC.
I don't have much idea why your readings look that way. The code below worked for me; however, I can't say for sure that it will work properly for you.
void initADC() {
    AD1CON1 = 0;
    AD1CON1bits.AD12B = 1;      // 12-bit mode
    AD1CON2 = 0;
    AD1CON3 = 0;
    AD1CON3bits.ADCS = 2;       // Tad = 3 * Tcy
    AD1CHS0 = 0;
    AD1CON1bits.ADON = 1;
    delayMs(1);
}

int readADC(char pin, unsigned samplingCycles) {
    AD1CHS0bits.CH0SA = pin;    // select the input pin
    AD1CON1bits.SAMP = 1;       // start sampling
    __delay32(samplingCycles);
    AD1CON1bits.SAMP = 0;       // start conversion
    while (!AD1CON1bits.DONE);  // wait for completion
    return ADC1BUF0;
}
Thanks to everybody for the contributions.
I finally found the trick. I'm posting the answer to my own question in case it helps someone.
Here it is:
The dsPIC can use different VREFH sources for ADC1, i.e. internal VDD or an external reference on a pin.
My hardware supplies an external 2.5 V reference on a dsPIC pin to be used as VREFH, and I set up the ADC accordingly.
The problem is that the dsPIC specs state that an external VREFH must be greater than 2.7 V, so 2.5 V was not enough for the converter to work properly. It was as foolishly simple as that!
Here is an ADC code example I made earlier, together with the connection diagram for AN2 on a dsPIC33EV.
#define FCY 3685000UL // __delay_ms() and __delay_us() depend on FCY.

#include <xc.h>       // p33EV256GM102.h
#include <libpic30.h>

void pins_init()
{
    // ANSELx has a default value of 0xFFFF.
    // 1 = pin is configured as an analog input
    // 0 = pin is configured as a digital I/O pin
    ANSELA = ANSELB = 0x0000; // In most cases it is better to initially set them to 0.
}

unsigned short adc1_raw = 0;

void adc1_init()
{
    // See 70621c.pdf, Example 16-3: Code Sequence to Set Up ADC Inputs.
    ANSELBbits.ANSB0 = 1;  // pin 4 - AN2/RPI32/RB0. 1 = pin is configured as an analog input
    AD1CHS0bits.CH0SA = 2; // channel 0 positive input is AN2. See AD1CON2bits.ALTS.
    AD1CHS0bits.CH0NA = 0; // 0 = channel 0 negative input is VREFL
    AD1CON1bits.FORM = 0;  // 00 = unsigned integer, VREFL: 0 .. VREFH: 1023
    AD1CON1bits.ASAM = 0;  // 0 = sampling begins when SAMP bit is set
    AD1CON1bits.AD12B = 0; // 0 = 10-bit, 4-channel ADC operation
    AD1CON2bits.VCFG = 0;  // AVdd for VREFH and AVss for VREFL
    AD1CON2bits.CHPS = 0;  // 00 = converts CH0; select 1-channel mode
    AD1CON2bits.CSCNA = 0; // 0 = does not scan inputs
    AD1CON2bits.ALTS = 0;  // 0 = always uses channel input selects for Sample MUX A
    AD1CON3bits.ADRC = 0;  // 0 = clocked from the instruction cycle clock (Tcy)
    AD1CON3bits.ADCS = 15; // conversion clock select bits. Tad = Tcy*(ADCS+1) = (1/Fcy)*(ADCS+1)
                           // Tad = (1/3685000)*(15+1) = 4342 ns
                           // (10-bit) Tconv = 12 * Tad = 52104 ns = 52.1 us
    IFS0bits.AD1IF = 0;    // clear the A/D interrupt flag bit
    IEC0bits.AD1IE = 0;    // do not enable the A/D interrupt
    AD1CON1bits.ADON = 1;  // turn on the ADC
    __delay_us(20);        // ADC stabilization delay
}
unsigned short adc1_read()
{
    AD1CON1bits.SAMP = 1; // start sampling
    __delay_us(100);      // wait for (enough) sampling time
    AD1CON1bits.SAMP = 0; // start converting. See AD1CON1bits.ASAM.
    while (!AD1CON1bits.DONE)
    {
        ; // Wait for the conversion to complete.
          // In bare-metal programming, most infinite-loop issues
          // are handled by watchdog reset.
    }
    return ADC1BUF0;
}

int main(void)
{
    pins_init();
    adc1_init();
    while (1)
    {
        ClrWdt();
        adc1_raw = adc1_read();
    }
    return 0;
}
I am using Debian (8.3.0-6) on a custom embedded board and working with a DHT11 sensor.
Briefly:
I need to read 40 bits from a GPIO pin, and each bit can take at most 70 microseconds. When the level stays high for up to 28 us the bit is logic 0; when it stays high for about 70 us it is logic 1. (So I have a timeout check for each bit, and if a bit takes more than 80 us I need to stop the process.)
Sometimes I can read all 40 bits correctly, but sometimes I can't, and the libgpiod call gpiod_line_get_value(line); misses the bit (my code is below). I am trying to figure out why I lose a bit, but I haven't found a sensible answer yet. So I was wondering: what am I missing, and what is the proper way to do this kind of GPIO programming?
Here is how I can tell that I am missing a bit: whenever I catch a bit, I set and reset another GPIO pin at the rising and falling edges of the bit, so I can see which bit went missing. Moreover, as far as I can tell, I always miss either both edges on one bit or one edge on each of two consecutive bits (rising and falling, or falling and rising). The first picture shows which bit I missed; the second shows a run where I read all bits correctly.
Here is my code:
//***************** Start reading data bits; each bit begins with a ~50 us low level *****************
for (int i = 0; i < DHT_DATA_BYTE_COUNT; i++) // DHT_DATA_BYTE_COUNT = 5
{
    for (int J = 7; J > -1; J--)
    {
        GPIO_SetOutPutPin(testPin); // gpiod_line_set_value(testPin, 1);
        int ret;
        start = micros();
        do
        {
            ret = GPIO_IsInputPinSet(dht11pin); // gpiod_line_get_value(dht11pin);
            delta = micros() - start;
            if (ret == -1)
            {
                err_step.step = 9;
                err_step.ret_val = -1;
                return -1;
            }
            if (delta > DHT_START_BIT_TIMEOUT_US) // 80 us
            {
                err_step.step = 10;
                err_step.ret_val = -2;
                err_step.timestamp[is] = delta;
                err_step.indx[is].i = i;
                err_step.indx[is++].j = J;
                GPIO_ResetOutPutPin(testPin);
                return -2;
            }
        } while (ret == 0);
        GPIO_ResetOutPutPin(testPin);
        err_step.ret_val = 10;
        GPIO_SetOutPutPin(testPin);
        start = micros();
        do
        {
            ret = GPIO_IsInputPinSet(dht11pin);
            delta = micros() - start;
            if (ret == -1)
            {
                err_step.step = 11;
                err_step.ret_val = -1;
                return -1;
            }
            if (delta > DHT_BEGIN_RESPONSE_TIMEOUT_US) // 80 us
            {
                err_step.step = 12;
                err_step.ret_val = -2;
                err_step.timestamp[is] = delta;
                err_step.indx[is].i = i;
                err_step.indx[is++].j = J;
                return -2;
            }
        } while (ret == 1);
        err_step.timestamp[is] = delta;
        err_step.indx[is].i = i;
        err_step.indx[is++].j = J;
        GPIO_ResetOutPutPin(testPin);
        err_step.ret_val = 10;
        (delta > DHT_BIT_SET_DATA_DETECT_TIME_US) ? bitWrite(dht11_byte[i], J, 1)
                                                  : bitWrite(dht11_byte[i], J, 0);
    }
}
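The bit decision at the end of the loop (a high time above DHT_BIT_SET_DATA_DETECT_TIME_US means logic 1) can be exercised host-side. A minimal sketch, in which the 50 us threshold and the helper names are illustrative rather than taken from the code above:

```c
#include <stdint.h>

/* Decode one DHT11 data bit from its measured high-pulse width (microseconds).
 * A high of ~26-28 us encodes 0, ~70 us encodes 1; 50 us is a common threshold. */
int dht_decode_bit(unsigned high_us)
{
    return high_us > 50 ? 1 : 0;
}

/* Pack 8 pulse widths (MSB first, as in the J = 7..0 loop) into one byte. */
uint8_t dht_decode_byte(const unsigned high_us[8])
{
    uint8_t b = 0;
    for (int j = 7; j >= 0; j--)
        if (dht_decode_bit(high_us[7 - j]))
            b |= (uint8_t)(1u << j);
    return b;
}
```

Separating the decode step like this also makes it easy to unit-test the protocol logic apart from the timing-sensitive GPIO polling.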
I have a problem measuring a signal from an ADC using I2S.
// Initialization of the I2S buffer
uint16_t* i2sReadBuffer = (uint16_t*)calloc(DMA_BUFFER_LEN, sizeof(uint16_t));
size_t bytesRead;

while (1) {
    // Read data until the buffer is full
    i2s_read(ADC_I2S_NUM, (void*)i2sReadBuffer, DMA_BUFFER_LEN * sizeof(uint16_t), &bytesRead, portMAX_DELAY);
    for (size_t i = 0; i < DMA_BUFFER_LEN; i++) {
        uint16_t value = i2sReadBuffer[i];
        printf("%d\n", value);
    }
}
I have successfully acquired a signal and filtered it, but the filtering only works with parameters chosen as if the sampling frequency were around 1 kHz.
[Image: ECG signal before and after filtering]
Does this mean the actual sampling frequency is 1 kHz rather than 10 kHz?
If not, why does the filtering only work with these parameters?
Matlab processing code:
xlsFile = 'ecg_data.xlsx';
[num, txt, raw] = xlsread(xlsFile);
input = cell2mat(raw);
a = fir1(100, [0.06 0.14], 'stop');
in = input(:, 2);
filtered = filter(a, 1, in);
y = fft(filtered);
N = length(y); % Length of a vector
f = (0:N - 1 ) * 100 / N; % Frequency vector
m = abs(y); % Magnitude
% y(m<1e-6) = 0;
p = unwrap(angle(y)); % Phase
X_vector = (1:3:N) / ((N - 1) * 1e-3);
title('Frequency spectrum')
subplot(2,2,1)
plot(f,m)
title('Magnitude')
ax = gca;
ax.XTick = X_vector;
subplot(2,2,2)
plot(f,p*180/pi)
title('Phase')
ax = gca;
ax.XTick = X_vector;
subplot(2,2,3)
plot(in)
title('Input signal')
xlabel('msec')
ax = gca;
subplot(2,2,4)
plot(filtered)
title('Filtered signal')
ax = gca;
The I2S ADC configuration is below:
#define SAMPLING_FREQ  10000
#define ADC_CHANNEL    ADC1_CHANNEL_4
#define ADC_UNIT       ADC_UNIT_1
#define DMA_BUFFER_LEN 1024
#define ADC_I2S_NUM    I2S_NUM_0

static void init_i2s_adc(void) {
    i2s_config_t i2s_config = {
        .mode = (i2s_mode_t)(I2S_MODE_MASTER | I2S_MODE_RX | I2S_MODE_ADC_BUILT_IN),
        .sample_rate = SAMPLING_FREQ,
        .bits_per_sample = I2S_BITS_PER_SAMPLE_16BIT,
        .channel_format = I2S_CHANNEL_FMT_RIGHT_LEFT,
        .communication_format = I2S_COMM_FORMAT_STAND_MSB,
        .intr_alloc_flags = ESP_INTR_FLAG_LEVEL1,
        .dma_buf_count = 8,
        .dma_buf_len = DMA_BUFFER_LEN,
        .tx_desc_auto_clear = 1,
        .use_apll = 0,
    };
    adc1_config_channel_atten(ADC_CHANNEL, ADC_ATTEN_11db);
    adc1_config_width(ADC_WIDTH_12Bit);
    i2s_driver_install(ADC_I2S_NUM, &i2s_config, 0, NULL);
    i2s_set_clk(ADC_I2S_NUM, SAMPLING_FREQ, I2S_BITS_PER_SAMPLE_16BIT, I2S_CHANNEL_MONO);
    i2s_set_adc_mode(ADC_UNIT, ADC_CHANNEL);
    i2s_adc_enable(ADC_I2S_NUM);
}
I have built an LED cube using transistors, shift registers and an Arduino Nano (and it kind of works). I know shift registers may be a poor design choice, but I have to work with what I've got, so please don't get stuck on that in your answers.
There is this piece of code:
bool input[32];

void resetLeds()
{
    input[R1] = 0;
    input[G1] = 0;
    input[B1] = 0;
    input[R2] = 0;
    input[G2] = 0;
    input[B2] = 0;
    input[R3] = 0;
    input[G3] = 0;
    input[B3] = 0;
    input[R4] = 0;
    input[G4] = 0;
    input[B4] = 0;
    input[X1Z1] = 1;
    input[X1Z2] = 1;
    input[X1Z3] = 1;
    input[X1Z4] = 1;
    input[X2Z1] = 1;
    input[X2Z2] = 1;
    input[X2Z3] = 1;
    input[X2Z4] = 1;
    input[X3Z1] = 1;
    input[X3Z2] = 1;
    input[X3Z3] = 1;
    input[X3Z4] = 1;
    input[X4Z1] = 1;
    input[X4Z2] = 1;
    input[X4Z3] = 1;
    input[X4Z4] = 1;
}
void loop()
{
    T = micros();
    for (int I = 0; I < 100; I++)
    {
        counter++;
        if (counter >= 256 / DIVIDER) counter = 0;
        for (int i = 0; i < 64; i += 4)
        {
            x = i / 16;
            z = (i % 16) / 4;
            resetLeds();
            input[XZ[x][z]] = 0;
            for (y = 0; y < 4; y++)
            {
                index = i + y;
                if (counter < xyz[index][0]) input[Y[y][RED]] = 1;
                if (counter < xyz[index][1]) input[Y[y][GREEN]] = 1;
                if (counter < xyz[index][2]) input[Y[y][BLUE]] = 1;
            }
            PORTB = 0;
            for (int j = 0; j < 32; j++)
            {
                bitWrite(OUT_PORT, 4, 0);        // clock low
                bitWrite(OUT_PORT, 3, input[j]); // data bit
                PORTB = OUT_PORT;
                bitWrite(OUT_PORT, 4, 1);        // clock high
                PORTB = OUT_PORT;
            }
            bitWrite(OUT_PORT, 0, 1);            // latch
            PORTB = OUT_PORT;
        }
    }
    T = micros() - T;
    Serial.println(T / 100);
}
The runtime of a single iteration is reported as 1274 microseconds, but I need it to be even lower to build a PWM function of sorts (manually turning a transistor on and off through a shift register). While optimizing I found strange behavior I cannot explain. There is this line in the code:
bitWrite(OUT_PORT, 3, input[j]);
When I remove this line, or change input[j] to 0, the runtime is halved. Apparently a single array lookup takes about 20 microseconds. I find that very strange, since I index this array in more places in the code (when writing), and there 28 writes take only 23 microseconds.
Can somebody please explain what is going on and/or how to make this piece of code run faster? I guess writes can be pipelined while a read stalls the code completely, since execution cannot continue before the value arrives. But even so, I highly doubt a read from cache should take 23 whole microseconds.
[EDIT 27/10/2020 16:55]
The part of the code that writes to the shift registers contains a lot of bitWrite calls, which are time-consuming. Instead of writing the bits every time, I implemented preconfigured options for the byte written to PORTB:
PORTB = LATCH_LOW;
for (int j = 0; j < 32; j++)
{
    if (input[j] == 0)
    {
        PORTB = CLOCK_OFF_DATA_0;
        PORTB = CLOCK_ON_DATA_0;
    }
    else
    {
        PORTB = CLOCK_OFF_DATA_1;
        PORTB = CLOCK_ON_DATA_1;
    }
}
PORTB = LATCH_HIGH;
This cuts my running time roughly in half, which is almost fast enough, but I wonder whether I could get it to run even faster. When I remove everything from the loop except the writes to the shift registers, and also remove the input[j] read, I get a runtime of 200 microseconds. This means that if I could remove the dependence on the input[j] read and compute its value inline, I should be able to get at least another 2x speed-up. To achieve 8-bit PWM I calculated that I need a running time of 40 microseconds or less, so I am going to stick with 16 (instead of 256) brightness levels for now to prevent flicker.
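One further idea in the same direction, sketched host-side: pre-render the whole 64-write port sequence once per frame, so the timing-critical loop contains no array-of-bools lookup or branch at all, just consecutive stores. The pattern constants below are placeholders for the real CLOCK_*_DATA_* bytes:

```c
#include <stdint.h>

/* Placeholder port patterns (stand-ins for CLOCK_OFF_DATA_0 etc.). */
enum { CLK_OFF_D0 = 0x00, CLK_ON_D0 = 0x10, CLK_OFF_D1 = 0x08, CLK_ON_D1 = 0x18 };

/* Expand a 32-bit frame into the 64 consecutive PORTB values to stream out. */
void render_frame(const uint8_t input[32], uint8_t seq[64])
{
    for (int j = 0; j < 32; j++) {
        seq[2 * j]     = input[j] ? CLK_OFF_D1 : CLK_OFF_D0; /* clock low, data set */
        seq[2 * j + 1] = input[j] ? CLK_ON_D1  : CLK_ON_D0;  /* clock high */
    }
}
```

The shift-out loop then reduces to a plain stream of `PORTB = seq[k];` stores, and the rendering cost moves outside the timing-critical section.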
[EDIT 28/10/2020 19:38]
I went into platform.txt and changed the optimization flag to -Ofast. This got my iteration time down by another 200 microseconds!
I am trying to read the ADC value from the potentiometer on the PIC24F Curiosity Development Board (PIC24FJ128GA204) and then turn on the LED if the value is more than 1000 (the ADC is configured as 10-bit).
However, the maximum value that ends up in the buffer is around 500. The following code shows the problem.
Please advise.
#include <xc.h>

#define Pot_TriState    _TRISC0
#define Pot_AnalogState _ANSC0

void ADC_Config(void);

int main(void) {
    Pot_TriState = 1;
    Pot_AnalogState = 1;
    _TRISC5 = 0;
    _LATC5 = 0;
    ADC_Config();
    while (1) {
        if (ADC1BUF10 >= 1000) {
            _LATC5 = 0; // Never executed
        }
        if (ADC1BUF10 >= 300) {
            _LATC5 = 1;
        }
    }
    return 0;
}
void ADC_Config(void) {
    AD1CON1bits.ADON = 0;       // ADC must be off when changing configuration
    AD1CON1bits.SSRC = 7;       // start conversion automatically after sampling
    AD1CON1bits.MODE12 = 0;     // 10-bit mode (0) rather than 12-bit (1)
    AD1CON2bits.PVCFG = 0;      // positive voltage reference: AVdd
    AD1CON2bits.NVCFG0 = 0;     // negative voltage reference: AVss
    AD1CHSbits.CH0SB = 0b01010; // 01010 = AN10
    AD1CHSbits.CH0SA = 0b01010; // added
    AD1CON3bits.ADRC = 1;       // 1 = use the ADC's internal RC clock
    AD1CON3bits.SAMC = 0b11111; // auto-sample time select: 11111 = 31 Tad
    AD1CON2bits.BUFREGEN = 1;   // result goes to the buffer matching the converted channel
    AD1CON1bits.ASAM = 1;       // auto-sampling
    AD1CON1bits.ADON = 1;
}