How does a mobile phone determine the percentage of battery left? Is it related to quantum physics?
I think that it runs some kind of test to determine the battery's efficiency at that moment and, on the basis of the results, determines the percentage of battery left. Please also share some code for this.
How does a mobile phone determine the percentage of battery left?
It measures the voltage supplied by the battery. Since the battery's voltage drop usually isn't linear, the device likely does some processing to map the measured voltage onto a linear "percent lifetime remaining" scale, with 0% corresponding to the voltage at which the device can no longer operate.
Similarly the charging circuit monitors the battery voltage during charging and either stops charging or switches to a trickle charging process when the battery is "full," meaning that the voltage has stopped increasing over time.
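Since the question asks for code: the snippet below is a minimal sketch (in Python, with a made-up calibration table for a generic single-cell Li-ion battery) of how a fuel-gauge routine might map a measured voltage onto a percentage by interpolating over a discharge curve. Real phones use dedicated fuel-gauge chips that also track current (coulomb counting), temperature, and battery ageing, so treat this purely as an illustration of the mapping idea, not as how any particular phone does it.

```python
# Hypothetical voltage-to-percentage mapping for a single Li-ion cell.
# The calibration table below is illustrative, not measured data.

# (voltage in volts, approximate charge remaining in percent)
DISCHARGE_CURVE = [
    (3.30, 0),    # device cuts off around here
    (3.50, 5),
    (3.68, 20),
    (3.78, 50),
    (3.95, 80),
    (4.20, 100),  # fully charged
]

def battery_percent(voltage: float) -> float:
    """Linearly interpolate the measured voltage onto the calibration curve."""
    if voltage <= DISCHARGE_CURVE[0][0]:
        return 0.0
    if voltage >= DISCHARGE_CURVE[-1][0]:
        return 100.0
    for (v_lo, p_lo), (v_hi, p_hi) in zip(DISCHARGE_CURVE, DISCHARGE_CURVE[1:]):
        if v_lo <= voltage <= v_hi:
            frac = (voltage - v_lo) / (v_hi - v_lo)
            return p_lo + frac * (p_hi - p_lo)
    return 0.0  # unreachable with a sorted table

print(battery_percent(3.85))  # roughly 62% with this made-up table
```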
glances provides a "top-like" display with a list of sensors and what temperature those sensors are reporting. One in particular is named "edge". Can someone explain where or what this sensor is?
I ran some benchmarking software on my older AMD GPU (RX 590) and my CPU fan starts spinning very fast, but the CPU "Composite" temperature is in the 40 °C range. CPU usage is minimal.
The sensor marked as "edge" shows a temperature of around 75 °C.
Thanks in advance
Apparently the "edge" sensor is the video card's temperature.
I don't have any documentation to reference, but whenever the GPU is very busy the temperature increases.
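If you want to confirm which driver the "edge" reading belongs to, you can walk the hwmon entries in sysfs yourself; on systems with an AMD card, the amdgpu driver typically exposes a temperature labelled "edge" for the GPU die. A rough sketch (Linux-only; paths and labels vary by kernel and driver):

```python
# Walk /sys/class/hwmon and print each driver's labelled temperature sensors.
# On AMD GPUs the amdgpu driver usually exposes labels like "edge" and "junction".
import glob
import os

for hwmon in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
    try:
        with open(os.path.join(hwmon, "name")) as f:
            driver = f.read().strip()          # e.g. "amdgpu", "k10temp", "nvme"
    except OSError:
        continue
    for label_path in sorted(glob.glob(os.path.join(hwmon, "temp*_label"))):
        input_path = label_path.replace("_label", "_input")
        try:
            with open(label_path) as f:
                label = f.read().strip()       # e.g. "edge"
            with open(input_path) as f:
                millic = int(f.read().strip()) # temperature in millidegrees Celsius
        except (OSError, ValueError):
            continue
        print(f"{driver:10s} {label:12s} {millic / 1000:.1f} °C")
```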
We are using Flink 1.9.1.
We have a source, a process function, and a sink. The application consumes and produces to kinesis.
The input rate (produced by a simulator) is 20 events per second. The per-second output rate for the process function shows 14 per second. The back pressure metric for the source is shown as OK (green). The event count (number of events sent by the source) and the number of events received by the process function also match, with very little delay.
But this count does not match the event count pushed by the simulator; it matches the 14-per-second rate instead.
Now my question is: does Flink regulate the input rate automatically?
In my case, how is the input rate being held at 14 per second?
If Flink does not regulate it, is there any other metric that I should be looking at that I'm missing?
It's not possible to force a Flink pipeline to consume events at a particular rate. By design, there is limited buffering in the network stack, and the slowest task in the execution graph will dictate the rate at which the pipeline will consume and process events.
The back pressure monitoring (that green OK signal) is not a definitive guide to whether back pressure is occurring. So long as the job is able to make steady forward progress, it probably won't indicate that there's a problem. You could examine some of the network queue metrics to get more insight: e.g., inPoolUsage, outPoolUsage, inputQueueLength. See Flink Network Stack Vol. 2: Monitoring, Metrics, and that Backpressure Thing for a lot more on this topic.
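If you want to sample those metrics outside the web UI, one option is Flink's monitoring REST API. The sketch below is only a rough illustration: it assumes the JobManager REST endpoint is reachable at localhost:8081 and that you fill in the job and vertex IDs yourself (e.g. from the /jobs endpoints); the exact metric names exposed can differ between setups.

```python
# Sketch: poll Flink's REST API for back-pressure-related network metrics.
import requests

BASE = "http://localhost:8081"
JOB_ID = "<your-job-id>"        # placeholder
VERTEX_ID = "<your-vertex-id>"  # placeholder

# List which metrics are available for this task vertex.
available = requests.get(f"{BASE}/jobs/{JOB_ID}/vertices/{VERTEX_ID}/metrics").json()
wanted = [m["id"] for m in available
          if "PoolUsage" in m["id"] or "inputQueueLength" in m["id"]]

# Fetch the current values of the buffer/queue metrics.
values = requests.get(
    f"{BASE}/jobs/{JOB_ID}/vertices/{VERTEX_ID}/metrics",
    params={"get": ",".join(wanted)},
).json()
for metric in values:
    print(metric["id"], "=", metric["value"])
```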
20 events per second seems very slow, so I am a bit surprised that something can't keep up with that rate, but that appears to be what's happening.
I need to create a load test for a certain number of requests in a given time. I could successfully set up the Precise Throughput Timer, and I believe I understand how it works. What I don't understand is how other timers, specifically the Gaussian Random Timer, would affect it.
I have run my test plan with and without the Gaussian Random Timer, but I don't see much of a difference in the results. I'm wondering whether adding a Gaussian Random Timer would help me better simulate my users' behavior.
I would say that these timers are mutually exclusive:
Precise Throughput Timer - allows you to reach and maintain the desired throughput (number of requests per given amount of time)
Gaussian Random Timer - allows you to simulate "think time"
If your goal is to mimic real users' behavior as closely as possible, go for the Gaussian Random Timer, because real users don't hammer the application under test non-stop; they need some time to "think" between operations, i.e. locate a button and move the mouse pointer there, read something, type something, etc. So if your test assumes simulating real users using real browsers, go for the Gaussian Random Timer and put realistic think times between operations. If you need your test to produce a certain number of hits per second, just increase the number of threads (virtual users) accordingly. Check out What is the Relationship Between Users and Hits Per Second? for a comprehensive explanation if needed.
On the other hand, the Precise Throughput Timer is handy when there are no "real users", for example when you're testing an API, a database, or a message queue and need to send a specific number of requests per second.
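If it helps to see the difference in numbers, below is a rough, tool-agnostic simulation (plain Python with made-up response and think times, nothing JMeter-specific): throughput-oriented pacing fixes the request rate directly, whereas Gaussian think time just inserts a randomized pause per iteration, so the resulting rate per thread is a by-product of response time plus think time.

```python
# Rough simulation of the two pacing strategies (illustrative numbers only).
import random

REQUESTS = 1000
RESPONSE_TIME = 0.3          # assumed average response time, seconds

# 1) Throughput-oriented pacing: aim for a fixed number of requests per second,
#    regardless of think time (the Precise Throughput Timer use case).
target_rps = 5.0
duration_throughput = REQUESTS / target_rps

# 2) Think-time pacing: each iteration = response time + Gaussian "think time"
#    (what the Gaussian Random Timer models: constant delay plus random deviation).
think_mean, think_deviation = 2.0, 0.5
duration_think = sum(
    RESPONSE_TIME + max(0.0, random.gauss(think_mean, think_deviation))
    for _ in range(REQUESTS)
)

print(f"fixed-throughput pacing : {REQUESTS / duration_throughput:.2f} req/s")
print(f"gaussian think time     : {REQUESTS / duration_think:.2f} req/s (single thread)")
print("=> with think time, hits/s is controlled by thread count, not by the timer")
```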
I want to get information about the energy remaining in a node, so that neighbouring nodes can reroute data packets when a neighbouring node's energy is low.
Currently the UnetStack simulator doesn't provide energy measurements directly. However, it's not hard to do this yourself for simulations. See this discussion for some suggestions:
https://unetstack.net/support/viewtopic.php?id=81:
The current version of UnetStack does not have any energy model per se. But the trace and logs provide you all the information you'll need (transmit/receive counts, simulation time) to compute the energy consumption. Specifically, you'd want to assign some energy per packet transmission, some energy per packet reception, and some power consumption while idling. If you dynamically adjust power level or packet duration in your protocol, you will need to account for that too.
Practical devices that use UnetStack often have a battery voltage parameter that provides some measure of energy available. However, this may be hard to use as battery voltage does not linearly depend on energy, but is highly dependent on the actual battery chemistry.
Something else you might want to bear in mind when developing routing protocols that use energy information: transmitting remaining-energy information from a node to its neighbors itself takes energy!
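Following the approach quoted above, the bookkeeping can be done offline on the trace statistics. The sketch below is plain Python (not a UnetStack agent), and every constant in it is a placeholder I've made up; substitute figures appropriate to your modem, protocol, and battery.

```python
# Sketch: estimate per-node energy use from simulation trace statistics.
# All constants are placeholders; replace them with values for your setup.

E_TX_PER_PACKET = 2.5    # joules per packet transmitted (placeholder)
E_RX_PER_PACKET = 0.5    # joules per packet received (placeholder)
P_IDLE = 0.05            # watts consumed while idling (placeholder)
INITIAL_ENERGY = 5000.0  # joules available at the start (placeholder)

def remaining_energy(tx_count, rx_count, sim_time_s):
    """Remaining energy after tx_count transmissions, rx_count receptions
    and sim_time_s seconds of simulated (mostly idle) time."""
    consumed = (tx_count * E_TX_PER_PACKET
                + rx_count * E_RX_PER_PACKET
                + sim_time_s * P_IDLE)
    return max(0.0, INITIAL_ENERGY - consumed)

# Example: counts taken from the simulation trace/logs for one node.
print(remaining_energy(tx_count=120, rx_count=480, sim_time_s=3600))
```

If your protocol changes power level or packet duration dynamically, the per-packet constants above would need to become per-packet values read from the trace instead.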
Lots of posts talk about the gyro drift problem. Some say that the gyro reading itself drifts, while others say that the integration drifts.
The raw gyro reading has drift [link].
The integration has drift [link] (Answer 1).
So I conducted an experiment; the next two figures are what I got. The first figure shows that the gyro reading doesn't drift at all, but it has an offset. Because of the offset, the integration is horrible. So it seems that the drift comes from the integration, doesn't it?
The second figure shows that when the offset is removed, the integration doesn't drift at all.
In addition, I conducted another experiment. First, I left the mobile phone stationary on the desk for about 10 s. Then I rotated it to the left and back, then to the right and back. The following figure tracks the angle quite well. All I did was remove the offset and then integrate.
So my big question here is: is the offset perhaps the essence of the gyro drift (integration drift)? Can a complementary filter or a Kalman filter be applied to remove the gyro drift in this situation?
Any help is appreciated.
If the gyro reading has "drift", it is called bias and not drift.
The drift is due to the integration, and it occurs even if the bias is exactly zero. The drift arises because you are accumulating the white noise of the reading through integration.
For drift cancellation, I highly recommend the Direction Cosine Matrix IMU: Theory manuscript; I have implemented sensor fusion for Shimmer 2 devices based on it.
(Edit: The document is from the MatrixPilot project, which has since moved to Github, and can be found in the Downloads section of the wiki there.)
If you insist on the Kalman filter then see https://stackoverflow.com/q/5478881/341970.
But why are you implementing your own sensor fusion algorithm?
Both Android (SensorManager, via Sensor.TYPE_ROTATION_VECTOR) and iPhone (Core Motion) offer their own.
With all due respect, Ali wrote something that is really questionable and imprecise (wrong).
The drift is the integration of the bias. It is the visible "effect" of the bias when you integrate. Noise (any kind of stationary noise) that has zero mean consequently has zero integral (I am not talking about the integral of the PSD, but about the additive noise on the signal, integrated over time).
The bias changes over time, as a function of voltage and operating temperature. For example, if the voltage changes (and it does change), the bias changes. The bias is neither fixed nor "predictable".
That is why you cannot eliminate the bias with the proposed subtraction of an estimated bias from the signal. Any estimate also has an error, and this error accumulates over time. If the error is smaller, the effects of the accumulation (the drifting) become visible over a longer interval, but they still exist.
Theory says that a total elimination of the bias is not possible at present. At the state of the art, no one has yet found a way, based only on gyroscopes, accelerometers, and magnetometers, to filter all of the bias out.
Android and iPhone have limited implementations of bias-elimination algorithms. They are not totally free of bias effects (e.g. over short intervals). For some applications this can cause severe problems and unpredictable results.
In this discussion, both Ali and Stefano have raised two fundamental aspects of drift due to ideal integration.
Basically, zero-mean white noise is an idealized concept, and even for such ideal noise, integration offers higher gain over the lower-frequency components of the noise, which introduces a low-frequency drift in the integrated signal. In theory, zero-mean noise should not cause any drift if observed over a sufficiently long time, but in practice ideal integration never works.
On the other hand, even a minor DC offset in the reading (input signal) can cause a significant drift over time if ideal integration (loss-less summation) is performed on it. Ideal integration has infinite gain on the DC component of an input signal, so it ramps up even a very small DC offset in the system. Therefore, for practical purposes we substitute ideal integration with a low-pass filter whose cut-off can be as low as required, but cannot be zero or too low in practice.
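To make that last point concrete, here is a small numerical sketch (plain Python, with illustrative constants I have chosen, not values from the discussion): an ideal integrator ramps up even a tiny constant offset without bound, while a leaky integrator, a first-order low-pass used in place of ideal integration, keeps the output bounded.

```python
# Ideal integration vs. "leaky" integration of a tiny constant offset.
DT = 0.01          # sample period, seconds
OFFSET = 0.001     # constant gyro offset, rad/s (illustrative)
TAU = 5.0          # leak time constant, seconds (illustrative)

ideal, leaky = 0.0, 0.0
for _ in range(60_000):                        # 10 minutes of samples
    ideal += OFFSET * DT                       # grows without bound
    leaky += OFFSET * DT - (DT / TAU) * leaky  # settles near OFFSET * TAU

print(f"ideal integrator after 10 min: {ideal:.3f} rad")   # ~0.6 rad of drift
print(f"leaky integrator after 10 min: {leaky:.3f} rad")   # ~0.005 rad
```

The trade-off is that the leak also attenuates genuinely slow rotations, which is why the cut-off cannot be pushed arbitrarily low.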
Motivated by Ali's reply (thanks Ali!), I did some reading and some numerical experiments and decided to post my own reply about the nature of gyro drift.
I've written a simple Octave script (run online) plotting white noise and integrated white noise; a rough sketch of the same idea follows below.
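Since the original Octave script and its plots are not reproduced here, this is a minimal Python stand-in for the same experiment: generate zero-mean white noise, integrate it, and watch the integral (a random walk) wander away from zero even though the noise itself averages out to roughly zero.

```python
# White noise vs. its running integral (a random walk).
import random

random.seed(1)
DT = 0.01                      # sample period, seconds
N = 10_000                     # 100 seconds of samples

noise = [random.gauss(0.0, 1.0) for _ in range(N)]   # zero-mean white noise
walk = []
angle = 0.0
for w in noise:
    angle += w * DT            # numerical integration
    walk.append(angle)

print(f"mean of the noise     : {sum(noise) / N:+.4f}")   # close to 0
print(f"final integrated value: {walk[-1]:+.4f}")         # wanders away from 0
print(f"max |integrated value|: {max(abs(x) for x in walk):.4f}")
```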
The angle plot with the offset removed that is shown in the question seems to resemble a typical random walk. A mathematical random walk has zero mean value, so that cannot be counted as drift. However, I believe numerical integration of white noise leads to a non-zero mean (as can be seen in the histogram plot for the random walk below). This, together with the linearly increasing variance, could be associated with the so-called gyro drift.
There is a great introduction to errors arising from gyroscopes and accelerometers here. In any case, I still have much to learn, so I could be wrong.
Regarding the complementary filter, there's some discussion here, showing how the gyro drift is reduced by it. The article is very informal, but I found it interesting.