Reducing Firebase latency by only sending the value when it changes?

I am using a Wemos D1 Mini (Arduino) to send sensor data to Firebase; it is a single value I'm sending. I found that this slows the program down, so the sensor can't read data as fast while the data is being sent (which is kind of obvious).
Anyhow, I want to send the value to Firebase only when it has changed. It is an int value, but I'm not sure how to go about this. Should I use a listener? This is a portion of my code:
int n = 0; // will be used to store the count
Firebase.setInt("Reps/Value", n); // sends value to fb
delay(100); // wait 100 ms before scanning again
I was hoping that the sensor could scan every second, which it does. But at this rate (pun intended) the value is being pushed to FB every second, which slows the scanning down to every 3 seconds. How can I call Firebase.setInt only when n changes its value?

After every reading you can check whether the value has changed, and only send it when it has, by adding a simple conditional statement.
int n = 0;     // will be used to store the count
int n_old = 0; // last value that was sent to Firebase

if (n != n_old) {                   // only send when the value has changed
    Firebase.setInt("Reps/Value", n); // sends value to fb
    n_old = n;                        // remember what was last sent
}
delay(100); // wait 100 ms before scanning again
Or, if you want to go for a tolerance approach, you can do something like this instead:
int n = 0;         // will be used to store the count
int n_old = 0;     // last value that was sent to Firebase
int tolerance = 3; // allow up to 3% change before sending

int avg = (n + n_old) / 2;                                // average of old and new value
if (avg != 0 && abs(n - n_old) * 100 / avg > tolerance) { // percentage change, multiplied before dividing so integer maths doesn't truncate to zero
    Firebase.setInt("Reps/Value", n); // sends value to fb
    n_old = n;                        // remember what was last sent
}
delay(100); // wait 100 ms before scanning again

Coming from professional use of remote databases, I would go for a gliding-average approach. You do this by creating a circular buffer of, let's say, 30 sensor values and calculating an average. As long as a value is within +/- 3% of the average recorded at time0, you do not update. If the value is above or below that band, you send it to Firebase and set a new time0 average. Depending on your precision needs, this eases the stress on the system. Imho only life-savers like circuit breakers or flow cutting (liquids) have to be real time; hobby applications like measuring wind speed, heating etc. are well served with 20-60 second intervals.
The event listener is, by the way, this same approach: just do something if the value is out of the norm. If you have a fixed target value as a reference, it's much easier to check for the +/- difference. And if the pricing of FB changes, sending every reading will become an issue for devs, so plan ahead.
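As a rough sketch of that gliding-average idea (Arduino-style; the buffer size of 30 and the 3% band follow the description above, while the names bufferAverage, recordAndMaybeSend and referenceAvg are made up for the example, and Firebase.setInt is called exactly as in the question):

#define BUF_SIZE 30       // circular buffer of the last 30 readings
#define TOLERANCE_PCT 3.0 // +/- 3% band around the time0 average

int buf[BUF_SIZE];
int bufIndex = 0;
bool bufFull = false;
float referenceAvg = 0;   // the "time0" average that was last sent

float bufferAverage() {
    int count = bufFull ? BUF_SIZE : bufIndex;
    if (count == 0) return 0;
    long sum = 0;
    for (int i = 0; i < count; i++) sum += buf[i];
    return (float)sum / count;
}

void recordAndMaybeSend(int reading) {
    buf[bufIndex] = reading;                  // overwrite the oldest slot
    bufIndex = (bufIndex + 1) % BUF_SIZE;
    if (bufIndex == 0) bufFull = true;

    float avg = bufferAverage();
    float deviation = (referenceAvg == 0) ? 100.0
                      : fabs(avg - referenceAvg) / referenceAvg * 100.0;
    if (deviation > TOLERANCE_PCT) {            // only talk to Firebase outside the band
        Firebase.setInt("Reps/Value", (int)avg); // sends value to fb, as in the question
        referenceAvg = avg;                      // this becomes the new time0 average
    }
}

loop() would then just read the sensor and call recordAndMaybeSend(n); the write to Firebase only happens when the buffered average drifts more than 3% from the last value that was sent.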

Related

How can I implement a timer, using the SDL2 libraries in C, for a lunar lander game?

I'm making a Lunar Lander-style game (Atari, 1979) in C. I need to implement a timer in my game and then print it on the screen.
I'm using SDL_RenderDrawLine because I have a vector representation of my characters. I need my code to draw a string of characters, and that part works. I use the sprintf function to transform a number into a string of characters so I can print it on the screen.
char aux_str[MAX_VALUES];
sprintf(aux_str, "%d", *value);
for (j = 0; j < strlen(aux_str); j++) {
    *tam_caracter_numero = letra_a_longitud(aux_str[j]);
    *ptr_valor = letra_a_vector(aux_str[j]);
    for (i = 0; i < *tam_caracter_numero - 1; i++) {
        SDL_RenderDrawLine(
            renderer,
            (*ptr_valor)[i][0] * escalado + pos_x,
            -(*ptr_valor)[i][1] * escalado + pos_y,
            (*ptr_valor)[i+1][0] * escalado + pos_x,
            -(*ptr_valor)[i+1][1] * escalado + pos_y
        );
    }
}
This works properly, but I need my timer to start at 0000, changing to 0001, 0002, ... But when I transform my number into a string using sprintf, the result is only "1", and the leading 0s aren't printed. Is there some function or some way to make this possible, so that it begins at 0000?
To print leading zeros, just use the format "%04d" instead of plain "%d" in your sprintf statement.
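Applied to the snippet from the question, that is just:

char aux_str[MAX_VALUES];
sprintf(aux_str, "%04d", *value); // 7 becomes "0007", 42 becomes "0042"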
As for timing the whole thing, I'd recommend going through Lazy Foo's SDL tutorials about timers.
To get the time use SDL_GetTicks(). This will get the time in milliseconds since SDL was initialized. See the man page for more details on that.
To use it as a timer you have to get the delta time since last call to SDL_GetTicks and update it as such.
unsigned time = SDL_GetTicks();
while(game_is_running) { // Your game loop or whatever thread
// keeps track of time
unsigned now = SDL_GetTicks();
unsigned delta_time = now - time;
// Either delay next frame to get a stable FPS and/or use it as
// calculation in physics/collision
// Update the time
time = now;
}
SDL_GetTicks is relatively expensive, so I would not recommend calling it more than once per loop iteration.
Off-topic: I also wouldn't recommend allocating a string repeatedly in a loop. Instead, allocate it once and overwrite it every iteration.
Edit: in my "off-topic" comment I am referring to char aux_str[MAX_VALUES];. Even though it's a single instruction, it's still unnecessary if it's inside the OP's game loop.
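Putting both answers together, a rough sketch of an on-screen seconds counter (SDL_GetTicks() and the "%04d" idea are from the answers above; start, timer_str and the place where the string gets drawn are assumptions for the example):

unsigned start = SDL_GetTicks(); // taken once, when the level begins
char timer_str[8];

while (game_is_running) {                                     // the existing game loop
    unsigned elapsed_s = (SDL_GetTicks() - start) / 1000;     // whole seconds so far
    snprintf(timer_str, sizeof timer_str, "%04u", elapsed_s); // "0000", "0001", ...
    // draw timer_str with the existing letra_a_vector()/SDL_RenderDrawLine code
}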

Average from error-prone measurement samples without buffering

I have a µC which measures temperature from a sensor with an ADC. Due to various circumstances it can happen that the reading is 0 (-30 °C) or an impossibly large value (500-1500 °C). I can't fix the reasons why these readings are so bad (time-critical ISRs and sometimes bad wiring), so I have to fix it with a clever piece of code.
I've come up with this (the code gets called OVERSAMPLENR times in an ISR):
#define OVERSAMPLENR 16        // read value 16 times
#define TEMP_VALID_CHANGE 0.15 // a 15% change in the reading is possible

//float raw_temp_bed_value = <sum of all readings>;
//ADC = <AVR ADC reading macro>;
if (temp_count > 1) { // temp_count = number of samples read, gets increased elsewhere
    float avgRaw = raw_temp_bed_value / temp_count;
    float diff = (avgRaw > ADC ? avgRaw - ADC : ADC - avgRaw) / (avgRaw == 0 ? 1 : avgRaw); // pulled out to shorten the line for SO
    if (diff > TEMP_VALID_CHANGE * ((float)(OVERSAMPLENR - temp_count) / OVERSAMPLENR)) // subsequent readings have a smaller tolerance; cast needed to avoid integer division
        raw_temp_bed_value += avgRaw;
    else
        raw_temp_bed_value += ADC;
} else {
    raw_temp_bed_value = ADC;
}
Here raw_temp_bed_value is a static global that gets read and processed later, once the ISR has fired 16 times.
As you can see, I check whether the difference between the current average and the new reading is less than 15%. If so, I accept the reading; if not, I reject it and add the current average instead.
But this breaks horribly if the first reading is something impossible.
One solution I thought of is this: in the last line, raw_temp_bed_value is reset to the first ADC reading. It would be better to reset it to raw_temp_bed_value/OVERSAMPLENR, so that I don't run into a "first reading" error.
Do you have any better solutions? I thought of some approaches featuring a moving average, and using the average of the moving average, but that would require additional arrays/RAM/cycles, which we want to avoid.
I've often used something I call a "rate of change" on the sampling. Use a variable that represents how many samples it takes to reach a certain value, say 20. Then keep adding the sample difference, divided by that rate of change, to a filtered variable. You can still use a threshold to filter out unlikely values.
#define FILTER_PRESET 0      // preset to a sensible startup value (see below)

float RateOfChange = 20;     // samples needed to follow a change
float PreviousAdcValue = 0;
float filtered = FILTER_PRESET;
float AdcValue;              // written by the ISR

while (1)
{
    // ISR stores the latest ADC reading in AdcValue here
    filtered = filtered + ((AdcValue - PreviousAdcValue) / RateOfChange);
    PreviousAdcValue = AdcValue;
    sleep();
}
Please note that this isn't exactly a low-pass filter; it responds quicker, and the last value added has the most significance. But it will not change much if a single value shoots out too far, depending on the rate of change.
You can also preset the filtered value to something sensible. This prevents wild startup behavior.
It takes up to RateOfChange samples to reach a stable value. You may want to make sure the filtered value isn't used before that, for example by counting the number of samples taken and skipping the temperature-control step while that counter is lower than RateOfChange, as in the sketch below.
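For example (a sketch only; RateOfChange and filtered are the variables from the loop above, while sampleCount and run_temperature_control() are hypothetical names):

static int sampleCount = 0;            // how many samples the filter has seen

if (sampleCount < RateOfChange) {
    sampleCount++;                     // filter still settling: skip the control step
} else {
    run_temperature_control(filtered); // hypothetical control routine
}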
For a more advanced (temperature) control routine, I highly recommend looking into PID control loops. These add a lot of functionality to get a fast, stable response, keep something at a certain temperature efficiently, and keep oscillations to a minimum. I've used the one in the Marlin firmware in my own projects and it works quite well.

Moving Average for ADC

Hi all, I am working on a project where I have to calculate the moving average of ADC readings. The data coming out of the ADC represents a sinusoidal wave.
This is the code I am using to get moving average of a given signal.
longNew = (8 bit data from ADC);
longNew = longNew << 8;
//Division
longNew = longNew >> 8; //255 Samples
longTemp = avgALong >> 8;
avgALong -= longTemp;// Old data
avgALong += longNew;// New Data
avgA = avgALong >> 8;//256 Point Average
[Reference image: upper limit and lower limit shown relative to the reference (avgA).]
Currently I am using a constant value to obtain the upper limit and lower limit of voltage for my application, which I am calculating as follows:
upper_limit = avgA + Delta(x);
lower_limit = avgA - Delta(x);
In my case I am taking Delta(x) = 15.
I want to calculate this constant, Delta(x), based on the signal strength.
The maximum voltage level of the signal is 255, i.e. 5 volts.
The minimum voltage level of the signal varies frequently, so a constant value is not useful for determining the lower and upper limits.
Please help. Thank you.
Now with the description of what's going on, I think you want three running averages:
The input signal. Lightly average it to help tamp down noise.
upper_limit: when you determine local maxima, push them into this average.
lower_limit: when you determine local minima, push them into this average.
Your delta would be (upper_limit-lower_limit)/8 (or 4, or whatever). Your hysteresis points would be upper_limit - delta and lower_limit + delta.
Every time you transition to '1', push the current local minimum into the lower_limit moving average and then begin searching for a new local maximum. When you transition to '0', push the local maximum into the upper_limit moving average and begin searching for a new local minimum.
There is a problem if your signal strength is wildly varying (you could get to a point where your signal suddenly drops into the hysteresis band and you never get any more transitions). You could solve this a few ways:
Count how much time you spend in the hysteresis band and reset everything if you spend too much time.
Or
for each sample in the hysteresis band, bring upper_limit and lower_limit slightly closer together. Eventually they'd collapse to the point where you start detecting transitions again.
Take this with a grain of salt. If you're doing this for a school project, it almost certainly won't match whatever scholarly method your professor is looking for.
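A sketch of the idea in C, for what it's worth; the /8 delta and the push-on-transition rule come from the answer above, while the function name track_sample, the start values, and the 1/8 weight used for each running average are assumptions:

#define SMOOTH(avg, x) ((avg) + ((x) - (avg)) / 8.0f)           /* simple running average */

static float sig_avg = 128, upper_limit = 255, lower_limit = 0; /* rough start values */
static float local_max = 0, local_min = 255;
static int state = 0;                                           /* current output bit */

int track_sample(float adc)
{
    sig_avg = SMOOTH(sig_avg, adc);               /* lightly averaged input signal */
    if (sig_avg > local_max) local_max = sig_avg; /* follow the local extremes */
    if (sig_avg < local_min) local_min = sig_avg;

    float delta = (upper_limit - lower_limit) / 8; /* "/8 (or 4, or whatever)" */
    if (state == 0 && sig_avg > upper_limit - delta) {
        state = 1;                                 /* rising transition */
        lower_limit = SMOOTH(lower_limit, local_min);
        local_max = sig_avg;                       /* start hunting a new maximum */
    } else if (state == 1 && sig_avg < lower_limit + delta) {
        state = 0;                                 /* falling transition */
        upper_limit = SMOOTH(upper_limit, local_max);
        local_min = sig_avg;                       /* start hunting a new minimum */
    }
    return state;
}

The reset-on-timeout or slowly-collapsing-limits handling for a vanishing signal would sit on top of this.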

What does sprintf do? (was: FPS Calculation in OpenGL)

For FPS calculation, I use some code I found on the web and it's working well. However, I don't really understand it. Here's the function I use:
void computeFPS()
{
numberOfFramesSinceLastComputation++;
currentTime = glutGet(GLUT_ELAPSED_TIME);
if(currentTime - timeSinceLastFPSComputation > 1000)
{
char fps[256];
sprintf(fps, "FPS: %.2f", numberOfFramesSinceLastFPSComputation * 1000.0 / (currentTime . timeSinceLastFPSComputation));
glutSetWindowTitle(fps);
timeSinceLastFPSComputation = currentTime;
numberOfFramesSinceLastComputation = 0;
}
}
My question is: how is the value that is calculated in the sprintf call stored in the fps array, since I don't really assign it?
This is not a question about OpenGL but about the C standard library. Reading the reference documentation of s(n)printf helps:
man s(n)printf: http://linux.die.net/man/3/sprintf
In short, snprintf takes a pointer to a user-supplied buffer and a format string, and fills the buffer according to the format string and the values given in the additional parameters.
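For example (any buffer and values will do; the point is only that sprintf writes its formatted output into the array passed as the first argument):

char buf[256];                                /* caller-supplied destination buffer */
int len = sprintf(buf, "FPS: %.2f", 59.9431);
/* buf now holds the string "FPS: 59.94"; len is the number of characters written (10) */

snprintf(buf, sizeof buf, ...) does the same but additionally limits the write to the size of the buffer.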
Here's my suggestion: if you have to ask about things like that, don't tackle OpenGL yet. You need to be fluent in the use of pointers and buffers when it comes to supplying buffer object data and shader sources. If you plan on using C for this, get a book on C and thoroughly learn that first. And unlike C++, you can actually learn C to a good degree over the course of a few months.
This function is presumably called at every redraw of your main loop (i.e. for every frame). What it's doing is increasing a frame counter and getting the time at which the current frame is displayed. Once per second (1000 ms) it checks that counter, displays its value as the window title, and resets it to 0.
/**
* This function has to be called at every frame redraw.
* It will update the window title once per second (or more) with the fps value.
*/
void computeFPS()
{
//increase the number of frames
numberOfFramesSinceLastComputation++;
//get the current time in order to check if it has been one second
currentTime = glutGet(GLUT_ELAPSED_TIME);
//the code in this if will be executed just once per second (1000ms)
if(currentTime - timeSinceLastFPSComputation > 1000)
{
//format the frames-per-second value into a string and store it in fps
char fps[256];
sprintf(fps, "FPS: %.2f", numberOfFramesSinceLastComputation * 1000.0 / (currentTime - timeSinceLastFPSComputation));
//use fps to set the window title
glutSetWindowTitle(fps);
//saves the current time in order to know when the next second will occur
timeSinceLastFPSComputation = currentTime;
//resets the number of frames per second.
numberOfFramesSinceLastComputation = 0;
}
}

Is this a good implementation of an FPS-independent game loop?

I currently have something close to the following implementation of an FPS-independent game loop for physics-based games. It works very well on just about every computer I have tested it on, keeping the game speed consistent when the frame rate drops. However, I am going to be porting it to embedded devices, which will likely struggle harder with video, and I am wondering if it will still cut the mustard.
Edit:
For this question, assume that msecs() returns the time in milliseconds for which the program has run. The implementation of msecs() is different on different platforms, and this loop is also run in different ways on different platforms.
#define MSECS_PER_STEP 20
int stepCount, stepSize; // these are not globals in the real source
void loop() {
int i,j;
int iterations =0;
static int accumulator; // the accumulator holds extra msecs
static int lastMsec;
int deltatime = msec() - lastMsec;
lastMsec = msec();
// deltatime should be the time since the last call to loop
if (deltatime != 0) {
// iterations determines the number of steps which are needed
iterations = deltatime/MSECS_PER_STEP;
// save any left over millisecs in the accumulator
accumulator += deltatime%MSECS_PER_STEP;
}
// when the accumulator has gained enough msecs for a step...
while (accumulator >= MSECS_PER_STEP) {
iterations++;
accumulator -= MSECS_PER_STEP;
}
handleInput(); // gathers user input from an event queue
for (j=0; j<iterations; j++) {
// here step count is a way of taking a more granular step
// without affecting the overall speed of the simulation (step size)
for (i=0; i<stepCount; i++) {
doStep(stepSize/(float) stepCount); // forwards the sim
}
}
}
I just have a few comments. The first is that you don't have enough comments. There are places where it's not clear what you are trying to do so it is difficult to say if there is a better way to do it, but I'll point those out as I come to them. First, though:
#define MSECS_PER_STEP 20
int stepCount, stepSize; // these are not globals in the real source
void loop() {
int i,j;
int iterations =0;
static int accumulator; // the accumulator holds extra msecs
static int lastMsec;
These are not initialized to anything. They probably turn out to be 0, but you should have initialized them. Also, rather than declaring them as static, you might want to consider putting them in a structure that you pass into loop by reference.
int deltatime = msec() - lastMsec;
Since lastMsec wasn't initialized (and is probably 0), this probably starts out as a big delta.
lastMsec = msec();
This line, just like the last line, calls msec. This is probably meant as "the current time", and these calls are close enough that the returned value is probably the same for both calls, which is probably also what you expected, but still, you call the function twice. You should change these lines to int now = msec(); int deltatime = now - lastMsec; lastMsec = now; to avoid calling this function twice. Current time getting functions often have much higher overhead than you think.
if (deltatime != 0) {
iterations = deltatime/MSECS_PER_STEP;
accumulator += deltatime%MSECS_PER_STEP;
}
You should have a comment here that says what this does, as well as a comment above
that says what the variables were meant to mean.
while (accumulator >= MSECS_PER_STEP) {
iterations++;
accumulator -= MSECS_PER_STEP;
}
This loop needs a comment. It also needs to not be there. It appears that it could have been replaced with iterations += accumulator/MSECS_PER_STEP; accumulator %= MSECS_PER_STEP;. The division and modulus should run in shorter and more consistent time than the loop on any machine that has hardware division (which many do).
handleInput(); // gathers user input from an event queue
for (j=0; j<iterations; j++) {
for (i=0; i<stepCount; i++) {
doStep(stepSize/(float) stepCount); // forwards the sim
}
}
Doing steps in a loop independent of input will have the effect of making the game unresponsive if it does execute slowly and get behind. It appears, at least, that if the game gets behind, all of the input will start to stack up and get executed together, and all of the in-game time will pass in one chunk. This is a less-than-graceful way to fail.
Additionally, I can guess what the j loop (outer loop) means, but the inner loop I am less clear on. Also, the value passed to the doStep function: what does that mean?
}
This is the last curly brace. I think that it looks lonely.
I don't know what goes on as far as whatever calls your loop function, which may be out of your control, and that may dictate what this function does and how it looks; but if not, I hope you will reconsider the structure. I believe a better way to do it would be to have a function that is called repeatedly, but with only one event at a time (issued regularly at a relatively short period). These events can be either user input events or timer events. User input events just set things up to react upon the next timer event. (When you don't have any events to process, you sleep.)
You should always assume that each timer event is processed at the same period, even though there may be some drift here if the processing gets behind. The main oddity you may notice is that if the game gets behind on processing timer events and then catches up again, the time within the game may appear to slow down (below real time), then speed up (above real time), and then settle back down (to real time).
Ways to deal with this include only allowing one timer event to be in the event queue at a time, which would result in time appearing to slow down (below real time) and then speed back up (to real time), with no super-speed interval.
Another way to do this, which is functionally similar to what you have, would be to have the last step of processing each timer event be to queue up the next timer event (note that no one else should send timer events, except for the first one, if this is the way you choose to implement the game). This would mean doing away with the regular time intervals between timer events, and it also restricts the program's ability to sleep, since at the very least every time the event queue is inspected there will be a timer event to process.
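Staying with the structure from the question rather than the event-driven rewrite, here is a sketch that folds the review comments together (explicit initialisation, state passed in by reference, a single call to msec(), division/modulus instead of the while loop; msec(), handleInput(), doStep(), stepCount and stepSize are assumed to exist exactly as in the original):

#define MSECS_PER_STEP 20

typedef struct {
    int accumulator;   /* leftover milliseconds, always < MSECS_PER_STEP */
    int lastMsec;      /* timestamp of the previous call to loop()       */
} LoopState;

void loop(LoopState *state) {
    int now = msec();                      /* call the clock exactly once */
    int deltatime = now - state->lastMsec; /* time since the previous call */
    state->lastMsec = now;

    state->accumulator += deltatime;
    int iterations = state->accumulator / MSECS_PER_STEP; /* whole steps owed */
    state->accumulator %= MSECS_PER_STEP;                 /* keep the remainder */

    handleInput();                         /* gathers user input from an event queue */
    for (int j = 0; j < iterations; j++) {
        for (int i = 0; i < stepCount; i++) {
            doStep(stepSize / (float)stepCount); /* forwards the sim */
        }
    }
}

/* caller: LoopState state = { 0, msec() };  then call loop(&state) repeatedly */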
