I'm using the Wireless Tools for Linux in a C program. I did a network scan, wanting to find the signal strength (dBm) of each of the detected networks.
I found the following in wireless.h:
struct iw_quality
{
    __u8 qual;    /* link quality (%retries, SNR,
                     %missed beacons or better...) */
    __u8 level;   /* signal level (dBm) */
    __u8 noise;   /* noise level (dBm) */
    __u8 updated; /* Flags to know if updated */
};
I print out the "level" in my C program like this:
printf("Transmit power: %lu ", result->stats.qual.level);
I also tried %d, but got the same output.
The result I get is numbers that look something like 178, 190, 201, 189, etc.
Is there a chance that these numbers are in watts? According to a watt-to-dBm converter, 178 watts is approx. 52.50 dBm.
What should I be getting instead? I thought dBm values looked like -80 dBm or so.
Do I need to convert those numbers? How do I get the right output?
Thanks!!
=======UPDATE=========
I noticed some weird behaviour. When I use the level property of wireless.h through my program, the "strongest" value I have recorded is around -50 dBm, whereas with the same router, when I run "iw wlan0 scan", I receive around -14 dBm as the strongest signal.
Does anyone know what causes this difference?
It looks like you found the right answer. For the record, this is what the source (iwlib.c) says about the encoding.
/* People are very often confused by the 8 bit arithmetic happening
* here.
* All the values here are encoded in a 8 bit integer. 8 bit integers
* are either unsigned [0 ; 255], signed [-128 ; +127] or
* negative [-255 ; 0].
* Further, on 8 bits, 0x100 == 256 == 0.
*
* Relative/percent values are always encoded unsigned, between 0 and 255.
* Absolute/dBm values are always encoded between -192 and 63.
* (Note that up to version 28 of Wireless Tools, dBm used to be
* encoded always negative, between -256 and -1).
*
* How do we separate relative from absolute values ?
* The old way is to use the range to do that. As of WE-19, we have
* an explicit IW_QUAL_DBM flag in updated...
* The range allow to specify the real min/max of the value. As the
* range struct only specify one bound of the value, we assume that
* the other bound is 0 (zero).
* For relative values, range is [0 ; range->max].
* For absolute values, range is [range->max ; 63].
*
* Let's take two example :
* 1) value is 75%. qual->value = 75 ; range->max_qual.value = 100
* 2) value is -54dBm. noise floor of the radio is -104dBm.
* qual->value = -54 = 202 ; range->max_qual.value = -104 = 152
*
* Jean II
*/
level and noise fall under example 2, and can be decoded with a cast to a signed 8-bit value.
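For illustration, a minimal decoding sketch, assuming a WE-19+ kernel where the IW_QUAL_DBM flag from wireless.h marks absolute dBm values (a plain int8_t cast covers the usual Wi-Fi range of roughly -100 to 0 dBm; values below -128 dBm would need the full level - 0x100 rule):

#include <stdint.h>
#include <linux/wireless.h>

int level_to_dbm(const struct iw_quality *q)
{
    if (q->updated & IW_QUAL_DBM)  /* absolute value, encoded in dBm */
        return (int8_t)q->level;   /* reinterpret the unsigned byte as signed */
    return q->level;               /* relative/percent value, leave as-is */
}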
For future record, this was resolved from the comments, thanks to the relevant people.
I simply needed to cast the unsigned value to a signed one and it was solved.
I changed the print line to this:
printf("Transmit power: %d ", (int8_t) result->stats.qual.level);
Now values that looked like 178 and 200 turn into -78, -56, etc.!
Background Information
I am currently developing a programming API for the Commodore C64 using KickC [in Beta] to let me more easily develop small programs, applications and possibly some games. It occurred to me that I may need a way to check whether my code is running on a PAL or an NTSC machine, and in the latter case, which NTSC machine, as the old NTSC C64 has one fewer scan line than the newer NTSC version.
After asking around for some help, Robin Harbron sent me a code snippet which works even with a CMD SuperCPU attached (my eventual target machine). Since he sent it as assembly, I had to use most of it as-is, via the asm directive in KickC, as follows:
/**
 * This initialisation routine will
 * determine the machine type
 * by setting the getMachineType global
 * as follows:
 * 37 is PAL
 * 5 is NTSC (old)
 * 6 is NTSC (new)
 * 0 (or any other value) is unknown
 *
 * For safety, the initial value at 0xc000
 * is loaded into the accumulator and
 * pushed onto the stack; it is then
 * restored after the getMachineType
 * global is set
 *
 * @author Robin Harbron
 */
void setMachineType() {
    asm {
        lda $c000
        pha
        sei
    __br1:
        lda $d011
        bmi __br1
    __br2:
        lda $d011
        bpl __br2
    __br3:
        lda $d012
        bit $d011
        bpl __ex1
        sta $c000
        bmi __br3
    __ex1:
        cli
    }
    getMachineType = peek(0xc000);
    asm {
        pla
        sta $c000
    }
}
At the top of my script, I have the getMachineType declared globally like this:
unsigned byte getMachineType;
and at the moment, my peek() function works like this:
/**
 * Returns a single byte from a specific
 * memory location
 */
unsigned char peek(unsigned short location) {
    unsigned char* memory = (unsigned char*)location;
    unsigned char returnByte = *memory;
    return returnByte;
}
So now I can determine the available number of scan lines on the host machine running my program, and therefore more easily create PAL and NTSC compatible executables.
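For example, a hypothetical usage sketch (the comparison values follow the comment block above; note the answer below explains that the PAL value 37 is hexadecimal):

void main() {
    setMachineType();
    if(getMachineType == 0x37) {
        // PAL: 312 scan lines
    } else if(getMachineType == 5 || getMachineType == 6) {
        // NTSC: 262 (old) or 263 (new) scan lines
    } else {
        // unknown machine
    }
}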
KickC is available to download from CSDb.dk, and will build and assemble with Kick Assembler.
The assembly code you shared is using the SCROLY register ($D011) and the RASTER register ($D012) of the VIC-II chip.
The high bit of $D011 is the most significant bit of the current raster scan line, while $D012 contains the lower 8 bits.
NTSC systems have 262 or 263 scan lines, depending on the model; PAL has 312.
The assembly routine waits for the instant the high bit is set, i.e. scan line 256.
If the high bit is already set, it loops until it clears:
__br1:
lda $d011
bmi __br1 // read $d011 again if high bit is set
Then it loops while the high bit is clear, exiting the loop as soon as it becomes set:
__br2:
lda $d011
bpl __br2 // read $d011 again if high bit is clear
Then it falls through to load the lower 8 bits of the RASTER scanline from $d012
This keeps storing the lower 8 bits of the scan-line value in $c000 while testing the high bit, until it clears again; once it has cleared, it doesn't store the lower 8 bits and instead exits the loop through __ex1:
__br3:
lda $d012
bit $d011
bpl __ex1
sta $c000
bmi __br3
__ex1:
cli
The result should be the number of the highest scan line observed, minus 256. Scan lines are zero-based, so the newer NTSC model with 263 scan lines tops out at line 262, giving 262 - 256 = 6; the older 262-line NTSC model tops out at line 261, giving 5.
PAL has 312 scan lines, so the highest zero-based line is 311, and 311 - 256 = 55, which is $37 in hex. The comments in the code should really have specified that the 37 was hexadecimal.
You can find information about these registers in the excellent "Mapping the Commodore 64" book.
You could translate all of this assembler into C (with the exception of setting and clearing the interrupt disable bit: sei, cli): just assign one char * pointer the value 0xD011 and another 0xD012, then do the same thing in C code.
char machineType = 0;
volatile signed char * SCROLY = (signed char *)0xd011;
volatile char * RASTER = (char *)0xd012;
asm {
    sei
}
while (*SCROLY < 0) ;  // spin while scan line >= 256 (until the raster wraps)
while (*SCROLY >= 0) ; // spin until the high bit is set again (scan line 256)
while (true) {
    char lines = *RASTER;
    if (*SCROLY >= 0)    // high bit cleared: the raster wrapped back to line 0
        break;
    machineType = lines; // keep the latest low byte seen past line 255
}
asm {
    cli
}
I need to implement an RMS calculation of a sine wave on an MCU (microcontroller, resource-constrained). The MCU lacks an FPU (floating-point unit), so I would prefer to stay in the integer realm. Captures are discrete via a 10-bit ADC.
Looking for a solution, I've found this great solution here by Edgar Bonet: https://stackoverflow.com/a/28812301/8264292
Seems like it completely suits my needs. But I have some questions.
Input is mains 230 VAC, 50 Hz. It is transformed and offset by hardware to become a 0-1 V (peak-to-peak) sine wave which I can capture with the ADC, getting 0-1023 readings. The hardware is calibrated so that a 260 VRMS input (i.e. about -368 to +368 V peak) becomes a 0-1 V peak-to-peak output. How can I "restore" the original wave's RMS value, given that I want to stay in the integer realm too? Units can vary; mV will do fine as well.
My first guess was subtracting 512 from the input sample (DC offset) and later doing the "magic" shift as in Edgar Bonet's answer. But I realized that's wrong, because the DC offset isn't fixed. Instead, the output is biased to start from 0 V, i.e. a 130 VAC input would produce a 0-500 mV peak-to-peak output (not 250-750 mV, which would have worked).
For true RMS with the DC offset removed, I need to subtract the squared mean of the samples from the mean of their squares, as in this formula:
rms = sqrt(sum(x²)/N - (sum(x)/N)²)
So I've ended up with the following function:
#define INITIAL 512
#define SAMPLES 1024
#define MAX_V 368UL // Maximum input peak in V ( 260*sqrt(2) )
/* K is defined based on the equation, where 64 = 2^6,
 * i.e. 6 bits added to the 10-bit ADC to make it 16-bit,
 * doubled to cover the whole range from -peak to +peak
 */
#define K (MAX_V*64*2)

uint16_t rms_filter(uint16_t sample)
{
    static int16_t rms = INITIAL;
    static uint32_t sum_squares = 1UL * SAMPLES * INITIAL * INITIAL;
    static uint32_t sum = 1UL * SAMPLES * INITIAL;

    sum_squares -= sum_squares / SAMPLES;
    sum_squares += (uint32_t) sample * sample;
    sum -= sum / SAMPLES;
    sum += sample;
    if (rms == 0) rms = 1; /* do not divide by zero */
    rms = (rms + (((sum_squares / SAMPLES) - (sum/SAMPLES)*(sum/SAMPLES)) / rms)) / 2;
    return rms;
}
...
// Somewhere in a loop
getSample(&sample);
rms = rms_filter(sample);
...
// After getting at least N samples (SAMPLES * X?)
uint16_t vrms = (uint32_t)(rms*K) >> 16;
printf("Converted Vrms = %d V\r\n", vrms);
Does it look fine? Or am I doing something wrong?
How does the SAMPLES (window size?) number relate to F (50 Hz) and my ADC capture rate (samples per second)? I.e., how many real samples do I need to feed to rms_filter() before I can get a real RMS value, given that my capture speed is X sps? At the least, how do I evaluate the required minimum N of samples?
I did not test your code, but it looks to me like it should work fine.
Personally, I would not have implemented the function this way. I would
instead have removed the DC part of the signal before trying to
compute the RMS value. The DC part can be estimated by sending the raw
signal through a low pass filter. In pseudo-code this would be
rms = sqrt(low_pass(square(x - low_pass(x))))
whereas what you wrote is basically
rms = sqrt(low_pass(square(x)) - square(low_pass(x)))
It shouldn't really make much of a difference though. The first formula,
however, spares you a multiplication. Also, by removing the DC component
before computing the square, you end up multiplying smaller numbers,
which may help in allocating bits for the fixed-point implementation.
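As an illustration, here is a rough integer sketch of the first formula, reusing the exponential-average and Newton-step tricks from your rms_filter(); the scaling and the initial DC guess of 512 are assumptions, not tested code:

#include <stdint.h>

uint16_t rms_filter2(uint16_t sample)
{
    static uint32_t dc_sum = 1UL * SAMPLES * 512; /* low_pass(x), scaled by SAMPLES */
    static uint32_t sq_sum = 0;                   /* low_pass((x - dc)^2), scaled */
    static uint16_t rms = 1;

    /* DC estimate: exponential moving average of the raw samples */
    dc_sum -= dc_sum / SAMPLES;
    dc_sum += sample;
    int16_t ac = (int16_t)(sample - dc_sum / SAMPLES); /* remove DC first */

    /* low-pass the squared AC component */
    sq_sum -= sq_sum / SAMPLES;
    sq_sum += (uint32_t)((int32_t)ac * ac);

    /* one Newton-Raphson step towards sqrt(mean square) */
    if (rms == 0) rms = 1;
    rms = (rms + (sq_sum / SAMPLES) / rms) / 2;
    return rms;
}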
In any case, I recommend you test the filter on your computer with
synthetic data before committing it to the MCU.
How does the SAMPLES (window size?) number relate to F (50 Hz) and my ADC
capture rate (samples per second)?
The constant SAMPLES controls the cut-off frequency of the low pass
filters. This cut-off should be small enough to almost completely remove
the 50 Hz part of the signal. On the other hand, if the mains
supply is not completely stable, the quantity you are measuring will
slowly vary with time, and you may want your cut-off to be high enough
to capture those variations.
The transfer function of these single-pole low-pass filters is
H(z) = z / (SAMPLES * z + 1 − SAMPLES)
where
z = exp(i 2 π f / f₀),
i is the imaginary unit,
f is the signal frequency and
f₀ is the sampling frequency
If f₀ ≫ f (which is desirable for a good sampling), you can approximate
this by the analog filter:
H(s) = 1/(1 + SAMPLES * s / f₀)
where s = i2πf and the cut-off frequency is f₀/(2π*SAMPLES). The gain
at f = 50 Hz is then
1/sqrt(1 + (2π * SAMPLES * f/f₀)²)
The relevant parameter here is (SAMPLES * f/f₀), which is the number of
periods of the 50 Hz signal that fit inside your sampling window.
If you fit one period, you are letting about 15% of the signal through
the filter. Half as much if you fit two periods, etc.
You could get perfect rejection of the 50 Hz signal if you design a
filter with a notch at that particular frequency. If you don't want
to dig into digital filter design theory, the simplest such filter may
be a simple moving average that averages over a period of exactly
20 ms. This has a non-trivial cost in RAM though, as you have to
keep a full 20 ms worth of samples in a circular buffer.
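For example, a sketch of such a moving average, assuming for illustration a 5 kHz sampling rate (100 samples per 20 ms period; adjust PERIOD_SAMPLES to your real rate):

#include <stdint.h>

#define PERIOD_SAMPLES 100 /* f₀ × 20 ms, assumed 5 kHz here */

uint16_t moving_average(uint16_t sample)
{
    static uint16_t buf[PERIOD_SAMPLES]; /* one full 20 ms of samples */
    static uint32_t total = 0;
    static uint8_t idx = 0;

    total -= buf[idx]; /* drop the sample from exactly one period ago */
    total += sample;   /* add the new sample */
    buf[idx] = sample;
    if (++idx >= PERIOD_SAMPLES) idx = 0;
    return total / PERIOD_SAMPLES;
}

Averaging over exactly one mains period puts a notch at 50 Hz and all its harmonics, at the cost of 200 bytes of buffer RAM.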
I'm having some trouble with my C code. I have an ADC which will be used to determine whether to shut down (trip-zone) the PWM I'm using. But my calculation seems not to work as intended, because the ADC shuts down the PWM at the wrong voltage levels. I initialize my variables as:
float32 current = 5;
Uint16 shutdown = 0;
and then I calculate as:
// Save the ADC input to variable
adc_info->adc_result0 = AdcRegs.ADCRESULT0>>4; // shift right 4 because the 12-bit result sits in a 16-bit register; ADCRESULT0 defined as Uint16
current = -3.462*((adc_info->adc_result0/1365) - 2.8);
// Evaluate if too high or too low
if(current > 9 || current < 1)
{
shutdown = 1;
}
else
{
shutdown = 0;
}
after which I use this if statement:
if(shutdown == 1)
{
EALLOW; // EALLOW protected register
EPwm1Regs.TZFRC.bit.OST = 1; // Force a one-shot trip-zone interrupt
EDIS; // end write to EALLOW protected register
}
So I want to trip the PWM if the current is above 9 or below 1, which should coincide with an ADC result of <273 (0x111) or >3428 (0xD64) respectively. Those ADC values correspond to the voltages 0.2 V and 2.51 V respectively. The ADC measures with 12-bit accuracy between 0 and 3 V.
However, this is not the case. Instead, the trip zone goes off at approximately 1 V and 2.97 V. So what am I doing wrong?
adc_info->adc_result0/1365
Did you do integer division here while assuming float?
Try this fix:
adc_info->adc_result0/1365.0
Also, @pmg's suggestion is good. Why spend cycles calculating the voltage when you can compare the ADC value directly against the known bounds?
if (adc_info->adc_result0 < 273 || adc_info->adc_result0 > 3428)
{
    shutdown = 1;
}
else
{
    shutdown = 0;
}
If you don't want to hardcode the calculated bounds (which is totally understandable), then define them as calculated from values which you'd want to hardcode literally:
#define VOLTAGE_MIN 0.2
#define VOLTAGE_MAX 2.51
#define AREF 3.0
#define ADC_PER_VOLT (4096 / AREF)
#define ADC_MIN (VOLTAGE_MIN * ADC_PER_VOLT) /* 273 */
#define ADC_MAX (VOLTAGE_MAX * ADC_PER_VOLT) /* 3427 */
/* ... */
shutdown = (adcresult < ADC_MIN || adcresult > ADC_MAX) ? 1 : 0;
/* ... */
Once you've made sure to grasp the integer division rules in C, consider a small addition to your code style: always write constant coefficients and divisors with a decimal point (to make sure they get a floating-point type), e.g. 10.0 instead of 10, unless you specifically mean integer division with truncation. Sometimes it may also be a good idea to specify float literal precision with the appropriate suffix.
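Applied to the original calculation, the fix would look like this (the f suffixes are a style suggestion to keep the arithmetic in single precision on a float32 target):

// 1365.0f forces floating-point division; without the decimal point,
// adc_result0 / 1365 truncates to 0, 1 or 2 before the rest of the math.
current = -3.462f * ((adc_info->adc_result0 / 1365.0f) - 2.8f);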
I'm trying to make a simple RPM meter using an ATmega328.
I have an encoder on the motor which gives 306 interrupts per rotation (the encoder has 3 spokes and interrupts on both rising and falling edges, and the motor is geared 51:1, so 6 transitions * 51 = 306 interrupts per wheel rotation). I am using a timer interrupting every 1 ms, but inside the interrupt it is set to recalculate every 1 second.
There seem to be two problems:
1) The RPM never goes below 60; instead it's either 0 or >= 60.
2) Reducing the time interval causes it to always be 0 (as far as I can tell).
Here is the code
int main(void){
    while(1){
        int temprpm = leftRPM;
        printf("Revs: %d \n", temprpm);
        _delay_ms(50);
    };
    return 0;
}

ISR (INT0_vect){
    ticksM1++;
}

ISR(TIMER0_COMPA_vect){
    counter++;
    if(counter == 1000){
        int tempticks = ticksM1;
        leftRPM = ((tempticks - lastM1)/306)*1*60;
        lastM1 = tempticks;
        counter = 0;
    }
}
Anything that is not declared in that code is declared globally and as an int, ticksM1 is also volatile.
The macros are AVR macros for the interrupts.
The multiply-by-1 in the leftRPM calculation represents time; ideally I want to use 1 ms without the if statement, in which case the 1 would become 1000.
For a speed between 60 and 120 RPM the result of ((tempticks - lastM1)/306) will be 1, and below 60 RPM it will be zero. Your output will always be a multiple of 60.
The first improvement I would suggest is not to perform expensive arithmetic in the ISR. It is unnecessary - store the speed in raw counts-per-second, and convert to RPM only for display.
Second, perform the multiply before the divide to avoid unnecessarily discarding information. Then, for example, at 60 RPM (306 CPS) you have (306 * 60) / 306 == 60. Even as low as 1 RPM you get (6 * 60) / 306 == 1. In fact it gives you a potential resolution of approximately 0.2 RPM as opposed to 60 RPM! To allow the parameters to be easily maintained, I recommend using symbolic constants rather than magic numbers.
#define ENCODER_COUNTS_PER_REV 306
#define MILLISEC_PER_SAMPLE 1000
#define SAMPLES_PER_MINUTE ((60 * 1000) / MILLISEC_PER_SAMPLE)

ISR(TIMER0_COMPA_vect){
    counter++;
    if(counter == MILLISEC_PER_SAMPLE)
    {
        int tempticks = ticksM1;
        leftCPS = tempticks - lastM1;
        lastM1 = tempticks;
        counter = 0;
    }
}
Then in main():
int temprpm = (leftCPS * SAMPLES_PER_MINUTE) / ENCODER_COUNTS_PER_REV ;
If you want better than 1 RPM resolution you might consider:
int temprpm_x10 = (leftCPS * SAMPLES_PER_MINUTE) / (ENCODER_COUNTS_PER_REV / 10) ;
then displaying:
printf( "%d.%d", temprpm / 10, temprpm % 10 ) ;
Given the potential resolution of 0.2 rpm by this method, higher resolution display is unnecessary, though you could use a moving-average to improve resolution at the expense of some "display-lag".
Alternatively now that the calculation of RPM is no longer in the ISR you might afford a floating point operation:
float temprpm = ((float)leftCPS * (float)SAMPLES_PER_MINUTE ) / (float)ENCODER_COUNTS_PER_REV ;
printf( "%f", temprpm ) ;
Another potential issue is that ticksM1++, tempticks = ticksM1, and the reading of leftRPM (or leftCPS in my solution) are not atomic operations. This can result in an incorrect value being read if interrupt nesting is supported, and even if it is not, in the case of the access from outside the interrupt context. If the maximum rate will be less than 256 CPS (about 50 RPM) you might get away with an atomic 8-bit counter; alternatively you can reduce your sampling period to ensure the count is always less than 256. Failing that, the simplest solution is to disable interrupts while reading or updating non-atomic variables shared between interrupt and thread contexts.
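For example, a minimal sketch of such a guarded read using avr-libc's ATOMIC_BLOCK (assuming leftCPS is a volatile int updated by the ISR):

#include <util/atomic.h>

int read_leftCPS(void)
{
    int copy;
    ATOMIC_BLOCK(ATOMIC_RESTORESTATE)
    {
        copy = leftCPS; /* interrupts are disabled around this 16-bit copy */
    }
    return copy;
}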
It's integer division. You would probably get better results with something like this:
leftRPM = ((tempticks - lastM1)/6);
I'm writing a program that tests a set of wires for open or short circuits. The program, which runs on an AVR, drives a test vector (a walking '1') onto the wires and reads the result back. It compares this resultant vector with the expected data, which is already stored on an SD card or external EEPROM.
Here's an example: assume we have a set of 8 wires, all of which are straight through, i.e. they have no junctions. So if we drive 0b00000010 we should receive 0b00000010.
Suppose we receive 0b11000010 instead. This implies there is a short circuit between wires 7 and 8 and wire 2. I can detect which bits I'm interested in with 0b00000010 ^ 0b11000010 = 0b11000000. This tells me clearly that wires 7 and 8 are at fault, but how do I find the positions of these '1's efficiently in a large bit array? It's easy to do this for just 8 wires using bit masks, but the system I'm developing must handle up to 300 wires (bits). Before I start using macros like the following and testing each bit in an array of 300*300 bits, I wanted to ask here whether there is a more elegant solution.
#define BITMASK(b) (1 << ((b) % 8))
#define BITSLOT(b) ((b) / 8)
#define BITSET(a, b) ((a)[BITSLOT(b)] |= BITMASK(b))
#define BITCLEAR(a, b) ((a)[BITSLOT(b)] &= ~BITMASK(b))
#define BITTEST(a, b) ((a)[BITSLOT(b)] & BITMASK(b))
#define BITNSLOTS(nb) (((nb) + 8 - 1) / 8)
Just to further show how to detect an open circuit: expected data 0b00000010, received data 0b00000000 (the wire isn't pulled high). 0b00000010 ^ 0b00000000 = 0b00000010, so wire 2 is open.
NOTE: I know testing 300 wires is not something the tiny RAM inside an AVR Mega 1281 can handle, that is why I'll split this into groups i.e. test 50 wires, compare, display result and then move forward.
Many architectures provide specific instructions for locating the first set bit in a word, or for counting the number of set bits. Compilers usually provide intrinsics for these operations, so that you don't have to write inline assembly. GCC, for example, provides __builtin_ffs, __builtin_ctz, __builtin_popcount, etc., each of which should map to the appropriate instruction on the target architecture, exploiting bit-level parallelism.
If the target architecture doesn't support these, an efficient software implementation is emitted by the compiler. The naive approach of testing the vector bit by bit in software is not very efficient.
If your compiler doesn't implement these, you can still code your own implementation using a de Bruijn sequence.
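For instance, a short sketch using GCC's __builtin_ctz to walk every set bit in a 32-bit chunk of the fault mask (the function and variable names are just for illustration):

#include <stdint.h>
#include <stdio.h>

void report_faults(uint32_t faults, int base_wire)
{
    while (faults) {
        int bit = __builtin_ctz(faults); /* index of the lowest set bit */
        printf("fault on wire %d\n", base_wire + bit);
        faults &= faults - 1;            /* clear that bit and continue */
    }
}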
How often do you expect faults? If you don't expect them that often, then it seems pointless to optimize the "fault exists" case -- the only part that will really matter for speed is the "no fault" case.
To optimize the no-fault case, simply XOR the actual result with the expected result and test input ^ expected == 0 to see whether any bits are set.
You can use a similar strategy to optimize the "few faults" case, if you further expect the number of faults to typically be small when they do exist -- mask the input ^ expected value to get just the first 8 bits, just the second 8 bits, and so on, and compare each of those results to zero. Then, you just need to search for the set bits within the ones that are not equal to zero, which should narrow the search space to something that can be done pretty quickly.
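A sketch of that narrowing strategy (the names are placeholders), which only drops to bit level for the rare non-zero bytes:

#include <stdint.h>
#include <stddef.h>

/* Returns the number of faults found; report_wire() is a hypothetical
 * callback for logging a faulty wire index. */
int scan_faults(const uint8_t *expected, const uint8_t *actual, size_t nbytes,
                void (*report_wire)(int))
{
    int faults = 0;
    for (size_t i = 0; i < nbytes; i++) {
        uint8_t diff = expected[i] ^ actual[i]; /* zero in the common case */
        if (diff == 0)
            continue;
        for (int b = 0; b < 8; b++)             /* narrow down to the bits */
            if (diff & (1u << b)) {
                report_wire((int)(i * 8 + b));
                faults++;
            }
    }
    return faults;
}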
You can use a lookup table. For example, a 256-byte log-base-2 lookup table can be used to find the most significant 1-bit in a byte:
uint8_t bit1 = log2[bit_mask];
where log2 is defined as follows:
uint8_t const log2[] = {
    0, /* not used: log2[0] */
    0, /* log2[0x01] */
    1, 1, /* log2[0x02], log2[0x03] */
    2, 2, 2, 2, /* log2[0x04],..,log2[0x07] */
    3, 3, 3, 3, 3, 3, 3, 3, /* log2[0x08],..,log2[0x0F] */
    ...
};
On most processors a lookup table like this will go into ROM. But the AVR is a Harvard machine, and placing data in code space (ROM) requires a special non-standard extension which depends on the compiler. For example, the IAR AVR compiler needs the extended keyword __flash. In WinAVR (GNU AVR) you would use the PROGMEM attribute, but it's more complex than that, because you also need special macros to read from program space.
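A sketch of the WinAVR/avr-libc approach (the table contents are elided as above; msb_index is just an illustrative name):

#include <avr/pgmspace.h>
#include <stdint.h>

static const uint8_t log2_table[256] PROGMEM = {
    0, 0, 1, 1, 2, 2, 2, 2, /* ... rest of the table ... */
};

uint8_t msb_index(uint8_t mask)
{
    /* pgm_read_byte fetches from flash rather than RAM */
    return pgm_read_byte(&log2_table[mask]);
}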
I think there is only one way to do this:
1) Create an output array, "outdata". Each item of the array can, for example, correspond to an 8-bit port register.
2) Send the outdata on the wires.
3) Read this data back as "indata".
4) Store the indata in an array mapped exactly as the outdata.
5) In a loop, XOR each byte of outdata with each byte of indata.
I would strongly recommend inline functions instead of those macros.
Why can't your MCU handle 300 wires?
300/8 = 37.5 bytes, rounded up to 38. It needs to be stored twice, outdata and indata: 38*2 = 76 bytes.
Can't you spare 76 bytes of RAM?
I think you're missing the forest for the trees. This seems like a bed-of-nails test. First, test some assumptions:
1) You know which pins should be live for each pin tested/energized.
2) You have a netlist for step 1, translated into a file on the SD card.
If you operate on a byte level as well as a bit level, it simplifies the issue. If you energize a pin, there is an expected output pattern stored in your file. First find the mismatched bytes; then identify the mismatched pins within each byte; finally store the energized pin together with the faulty pin numbers.
You don't need an array for searching, or for results. The general idea:
unsigned int numwires = 300;
unsigned int numbytes = numwires/8 + ((numwires%8) ? 1 : 0); /* parenthesize: ?: binds looser than + */
for(unsigned char currbyte = 0; currbyte < numbytes; currbyte++)
{
    unsigned char testbyte = inchar(baseaddr + currbyte);
    unsigned char goodbyte = getgoodbyte(testpin, currbyte /* byte offset */);
    if(testbyte ^ goodbyte){
        // have a mismatch, report the pins
        for(unsigned char j = 0, mask = 0x01; j < 8; mask <<= 1, j++){
            if((mask & testbyte) != (mask & goodbyte)) // for clarity
                logbadpin(testpin, currbyte*8 + j /* pin/wire value */, mask & testbyte /* bad value */);
        }
    }
}