Looking for a description of "sensors" column in glances system monitoring tool - benchmarking

glances provides a "top-like" display with a list of sensors and what temperature those sensors are reporting. One in particular is named "edge". Can someone explain where or what this sensor is?
I ran some benchmarking software on my older AMD GPU (an RX 590); my CPU fan starts spinning very fast, but the CPU "Composite" temperature is in the 40 °C range and CPU usage is minimal.
The sensor marked "edge" shows a temperature of around 75 °C.
Thanks in advance

Apparently the 'edge' sensor is the video card (GPU) temperature; on AMD cards the amdgpu driver exposes a GPU temperature reading labelled "edge" through the hwmon interface, which is where glances picks it up.
I don't have any documentation to reference, but whenever the GPU is very busy that temperature increases.
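If you want to check this on your own machine, a minimal Python sketch along these lines may help (glances reads its sensor data through psutil, so the same labels should appear; 'amdgpu' and 'edge' are labels typically reported by the AMD driver, not something specific to glances):

```python
# Minimal sketch: list the same hwmon temperature sensors that glances displays.
import psutil

temps = psutil.sensors_temperatures()          # dict: chip name -> list of readings
for chip, entries in temps.items():
    for entry in entries:
        # On AMD GPUs the amdgpu driver typically reports labels such as
        # "edge", "junction" and "mem"; "edge" is the GPU die temperature.
        print(f"{chip:10s} {entry.label or 'unnamed':10s} {entry.current:5.1f} °C")
```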

Related

Information about energy of a node

I want to get information about the energy remaining at a node, so that neighbouring nodes can reroute data packets when a neighbour's energy is low.
Currently the UnetStack simulator doesn't provide energy measurements directly. However, it's not hard to do yourself for simulations. See this discussion for some suggestions:
https://unetstack.net/support/viewtopic.php?id=81:
The current version of UnetStack does not have any energy model per se. But the trace and logs provide you all the information you'll need (transmit/receive counts, simulation time) to compute the energy consumption. Specifically, you'd want to assign some energy per packet transmission, some energy per packet reception, and some power consumption while idling. If you dynamically adjust power level or packet duration in your protocol, you will need to account for that too.
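As a rough illustration of that suggestion (a Python sketch; the energy constants are made-up placeholders you would replace with values for your own modem, not anything provided by UnetStack):

```python
# Hypothetical energy bookkeeping for a simulated node, following the suggestion
# above. The constants are illustrative placeholders, not UnetStack values.
E_TX_PER_PACKET = 1.5      # J per transmitted packet (assumed)
E_RX_PER_PACKET = 0.5      # J per received packet (assumed)
P_IDLE          = 0.02     # W drawn while idling (assumed)

def energy_consumed(n_tx, n_rx, sim_time_s, tx_time_s=0.0, rx_time_s=0.0):
    """Estimate energy use from counts extracted from the simulation trace/logs."""
    idle_time = sim_time_s - tx_time_s - rx_time_s
    return (n_tx * E_TX_PER_PACKET
            + n_rx * E_RX_PER_PACKET
            + idle_time * P_IDLE)

# e.g. 120 transmissions, 300 receptions over a 3600 s simulation
print(energy_consumed(n_tx=120, n_rx=300, sim_time_s=3600))
```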
Practical devices that use UnetStack often have a battery voltage parameter that provides some measure of energy available. However, this may be hard to use as battery voltage does not linearly depend on energy, but is highly dependent on the actual battery chemistry.
Something else to bear in mind when developing routing protocols that use energy information: transmitting a node's remaining-energy information to its neighbours itself takes energy!

NTP and RTC HW Clock weird results

In an attempt to make the system time on an ODroid as close to real time as possible, I've tried adding a real-time clock (RTC) to the ODroid. The RTC has an accuracy of ±4 ppm.
Without the real-time clock, I would get results like this (synced with the NTP server every 60 seconds). The blue is an Orange Pi for comparison. The x-axis is the sample number, and the y-axis is the offset reported by the NTP server in ms.
So I tried the same thing (more samples, but the same interval), but instead of just syncing with the NTP server I did the following:
1. Set the system time from the HW clock.
2. Sync with the NTP server to update the system time, and record the offset reported by the server.
3. Update the HW clock from the system time, since it has just been synced to real time.
Then I wait 60 seconds and repeat. I didn't expect it to be perfect, but what I got shocked me a little bit.
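For reference, the loop above could be scripted roughly like this (a Python sketch assuming the usual Linux hwclock and ntpdate tools and root privileges; the server name is a placeholder):

```python
# Rough sketch of the measurement loop described above (assumes the standard
# Linux hwclock and ntpdate utilities; run as root, server name is a placeholder).
import re
import subprocess
import time

NTP_SERVER = "pool.ntp.org"   # placeholder

while True:
    # 1. Set the system time from the hardware clock.
    subprocess.run(["hwclock", "--hctosys"], check=True)

    # 2. Sync with the NTP server and record the offset it reports.
    out = subprocess.run(["ntpdate", NTP_SERVER],
                         capture_output=True, text=True, check=True).stdout
    match = re.search(r"offset\s+(-?[\d.]+)", out)
    if match:
        print(time.time(), float(match.group(1)))   # offset in seconds

    # 3. Write the freshly synced system time back to the hardware clock.
    subprocess.run(["hwclock", "--systohc"], check=True)

    # 4. Wait 60 seconds and repeat.
    time.sleep(60)
```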
What in the world am I looking at? The jitter becomes less and less and follows an almost straight line, but when it reaches the perfect time (about 410 minutes in....), it then seems to continue and lets the jitter and offset grow again.
Can anyone explain this, or maybe tell me what I'm doing wrong?
This is weird!
So you are plotting the difference between your RTC time and the NTP server time. Where is the NTP server located? In the second plot you are working in a range of a couple of hundred ms. NTP has accuracy limitations. From Wikipedia:
https://en.wikipedia.org/wiki/Network_Time_Protocol
NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more.
Your data is a bit weird looking though.

Data compression techniques for power plant data

I have recently been studying data management on my own. After reading for some time, I still don't have the whole picture of how data flows from data acquisition to a database or warehouse.
In a power plant I have 1000 sensors installed, and I want to know what happens before the data is stored in the database. For instance, sensor data is sampled at a 1 Hz frequency; with this big amount of data we presumably need to do data compression before sending it to the database. So I want to know how all of this is done, especially the data compression: if the data are digital values with timestamps, what kinds of compression techniques can be used, and in the Big Data context, how is data compressed?
The way OSIsoft PI does this is by checking how much a collected point has deviated from the previous point. If it is a small amount then the point gets "dropped", so only meaningful data is stored. When you ask for a value at a time for which no data exists, PI interpolates it.
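Sketched very roughly in Python, that kind of deviation-based ("deadband") storage plus interpolation on read-back could look like the following (illustrative only; PI's actual exception/compression algorithm is more sophisticated):

```python
# Simple deadband compression: keep a sample only if it deviates from the last
# stored value by more than a threshold. Readback interpolates between stored points.
def compress(samples, deadband):
    """samples: list of (timestamp, value); returns the subset that is stored."""
    stored = [samples[0]]
    for t, v in samples[1:]:
        if abs(v - stored[-1][1]) > deadband:
            stored.append((t, v))
    if stored[-1] != samples[-1]:
        stored.append(samples[-1])        # always keep the final point
    return stored

def value_at(stored, t):
    """Linearly interpolate a value at time t from the stored points."""
    for (t0, v0), (t1, v1) in zip(stored, stored[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t outside stored range")

raw = [(0, 20.0), (1, 20.01), (2, 20.02), (3, 21.5), (4, 21.52), (5, 23.0)]
kept = compress(raw, deadband=0.1)
print(kept)                 # far fewer points than raw
print(value_at(kept, 2.5))  # interpolated, as PI does for times with no stored data
```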
Data can be compressed in many ways, from zipping it up to totally customised solutions. In fact, for power plant data such as you are looking at, one of the larger systems is PI from OSIsoft. I used to work for a company that used it for 8 power stations. They have a totally bespoke database system in which they store all their measurements. It is apparently optimised so that frequent readings from a sensor take up little space, and missing readings don't increase the space taken much. How they do it I have no idea; I expect it is proprietary and they won't tell people.
However, how data flows from sensor to database can be complex. Have a poke around the OSIsoft site; they have some information available.

Mobile battery percent determining program

How does a mobile phone determine the percentage of battery left? Is it related to quantum physics?
I think it runs some kind of test to determine the battery's efficiency at the time and, on the basis of the results, determines the battery left. Please also share some code for this.
How does a mobile phone determine the percentage of battery left?
It measures the voltage supplied by the battery. Since the battery's voltage drop usually isn't linear, the device likely does some processing to map the measured voltage onto a linear "percent lifetime remaining" scale, with 0% corresponding to the voltage at which the device can no longer operate.
Similarly the charging circuit monitors the battery voltage during charging and either stops charging or switches to a trickle charging process when the battery is "full," meaning that the voltage has stopped increasing over time.
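As a hedged illustration of that voltage-to-percentage mapping (Python; the breakpoints below are typical of a generic Li-ion cell, not taken from any particular phone):

```python
# Map a measured battery voltage to a rough "percent remaining" value by linear
# interpolation over a discharge-curve lookup table (values are illustrative).
CURVE = [  # (voltage, percent) for a generic Li-ion cell, highest first
    (4.20, 100), (4.00, 85), (3.85, 65), (3.75, 45),
    (3.65, 25), (3.50, 10), (3.30, 0),
]

def percent_remaining(voltage):
    if voltage >= CURVE[0][0]:
        return 100
    if voltage <= CURVE[-1][0]:
        return 0
    for (v_hi, p_hi), (v_lo, p_lo) in zip(CURVE, CURVE[1:]):
        if v_lo <= voltage <= v_hi:
            return p_lo + (p_hi - p_lo) * (voltage - v_lo) / (v_hi - v_lo)

print(percent_remaining(3.80))  # somewhere between 45% and 65%
```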

Gyroscope drift on mobile phones

Lots of posts talk about the gyro drift problem. Some say that the raw gyro reading drifts, while others say that the integration drifts.
The raw gyro reading has drift[link].
The integration has drift[link](Answer1).
So I conducted an experiment; the next two figures are what I got. The first figure shows that the gyro reading doesn't drift at all, but it has an offset. Because of the offset, the integration is horrible. So it seems that the drift is in the integration, is that right?
The next figure shows that when the offset is reduced, the integration doesn't drift at all.
In addition, I conducted another experiment. First, I put the mobile phone stationary on the desk for about 10 s. Then I rotated it to the left and back, then to the right and back. The following figure tracks the angle quite well. All I did was subtract the offset and then integrate.
So, my big question here is: is the offset perhaps the essence of the gyro drift (integration drift)? Can a complementary filter or Kalman filter be applied to remove the gyro drift in this situation?
Any help is appreciated.
If the gyro reading has "drift", it is called bias and not drift.
The drift is due to the integration and it occurs even if the bias is exactly zero. The drift is because you are accumulating the white noise of the reading by integration.
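A quick numerical illustration of that distinction (a Python sketch with made-up noise and bias values, not tied to any real gyro): integrating a constant bias gives a linear ramp, while integrating pure zero-mean noise still wanders away from zero as a random walk.

```python
# Integrate (cumulatively sum) simulated gyro readings to see bias vs. drift.
import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 60_000                      # 100 Hz for 10 minutes
noise = rng.normal(0.0, 0.05, n)          # zero-mean white noise (rad/s), assumed sigma
bias = 0.002                              # constant bias (rad/s), assumed

angle_bias_only  = np.cumsum(np.full(n, bias) * dt)   # linear ramp: integrated bias
angle_noise_only = np.cumsum(noise * dt)              # random walk: integrated noise

print("final angle from bias alone :", angle_bias_only[-1])   # ~ bias * total time
print("final angle from noise alone:", angle_noise_only[-1])  # nonzero, grows like sqrt(t)
```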
For drift cancellation, I highly recommend the Direction Cosine Matrix IMU: Theory manuscript; I have implemented sensor fusion for Shimmer 2 devices based on it.
(Edit: the document is from the MatrixPilot project, which has since moved to GitHub, and can be found in the Downloads section of the wiki there.)
If you insist on the Kalman filter then see https://stackoverflow.com/q/5478881/341970.
But why are you implementing your own sensor fusion algorithm?
Both Android (SensorManager under Sensor.TYPE_ROTATION_VECTOR) and iPhone (Core Motion) offer their own.
Dear Ali wrote something that is really questionable and imprecise (wrong).
The drift is the integration of the bias; it is the visible "effect" of the bias when you integrate. Noise, any kind of stationary noise with zero mean, consequently has zero integral (I am not talking about the integral of the PSD, but about the additive noise of the signal, integrated over time).
The bias changes over time, as a function of voltage and operating temperature. E.g. if the voltage changes (and it does change), the bias changes. The bias is neither fixed nor "predictable".
That is why you cannot eliminate the bias by the proposed subtraction of an estimated bias from the signal. Also, any estimate has an error, and this error accumulates over time. If the error is smaller, the effects of the accumulation (the drifting) take longer to become visible, but they still exist.
Theory says that a total elimination of the bias is not possible at present. At the state of the art, no one has yet found a way, based only on gyroscopes, accelerometers and magnetometers, to filter all of the bias out.
Android and iPhone have limited implementations of bias elimination algorithms. They are not totally free of bias effects (e.g. over short intervals). For some applications this can cause severe problems and unpredictable results.
In this discussion both Ali and Stefano have raised two fundamental aspects of the drift caused by ideal integration.
Basically, zero-mean white noise is an idealized concept, and even for such ideal noise, integration has higher gain over the lower-frequency components of the noise, which introduces a low-frequency drift in the integrated signal. In theory, zero-mean noise should not cause any drift if observed over a sufficiently long time, but in practice ideal integration never works that way.
On the other hand, even a minor DC offset in the reading (the input signal) can cause a significant drift over time if ideal integration (lossless summation) is performed on it. Ideal integration ramps up even very small DC offsets in the system, since it has infinite gain at the DC component of the input signal. Therefore, for practical purposes, we substitute ideal integration with a low-pass filter whose cut-off can be as low as required, but which cannot be zero or too low for practical purposes.
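A minimal sketch of that substitution (Python; the leak coefficient is an arbitrary illustrative choice): a "leaky" integrator behaves like a first-order low-pass filter with large but finite DC gain, so a small constant offset no longer ramps up without bound.

```python
# Compare ideal integration with a leaky integrator on a signal that is just
# a small DC offset plus noise. The leak factor is an illustrative choice.
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.01, 60_000
signal = 0.002 + rng.normal(0.0, 0.05, n)   # small offset + white noise (rad/s)

ideal = np.cumsum(signal * dt)               # ideal integration: offset ramps up forever

leak = 0.999                                 # < 1.0 => finite DC gain
leaky = np.zeros(n)
for k in range(1, n):
    leaky[k] = leak * leaky[k - 1] + signal[k] * dt

print("ideal integrator final value:", ideal[-1])   # keeps growing with time
print("leaky integrator final value:", leaky[-1])   # settles around offset*dt/(1-leak)
```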
Motivated by Ali's reply (thanks Ali!), I did some reading and some numerical experiments and decided to post my own reply about the nature of gyro drift.
I've written a simple Octave Online script plotting white noise and integrated white noise:
The angle plot with reduced offset that is shown in the question seems to resemble a typical random walk. A mathematical random walk has zero mean, so that cannot be counted as drift. However, I believe numerical integration of white noise leads to a non-zero mean (as can be seen in the histogram plot for the random walk below). This, together with the linearly increasing variance, could be associated with the so-called gyro drift.
There is a great introduction to the errors arising from gyroscopes and accelerometers here. In any case, I still have much to learn, so I could be wrong.
Regarding the complementary filter, there's some discussion here showing how the gyro drift is reduced by it. The article is very informal, but I found it interesting.
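For reference, the complementary filter mentioned there usually amounts to a one-line blend per sample; a hedged Python sketch (the 0.98 weight is a common textbook choice, not taken from the article):

```python
# Complementary filter: trust the integrated gyro on short time scales and the
# accelerometer-derived angle on long time scales, which bleeds off the gyro drift.
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """gyro_rates in rad/s, accel_angles in rad, both sampled at the same times."""
    angle = accel_angles[0]
    out = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * accel_angle
        out.append(angle)
    return out

# e.g. angles = complementary_filter(gyro_samples, accel_angles, dt=0.01)
```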

Resources