Information about the energy of a node - UnetStack

I want to get information about the energy available at a node, so that neighbouring nodes can reroute data packets when a node's remaining energy is low.

Currently the UnetStack simulator doesn't provide energy measurements directly. However, it's not hard to do this yourself in simulations. See this discussion for some suggestions:
https://unetstack.net/support/viewtopic.php?id=81:
The current version of UnetStack does not have any energy model per se. But the trace and logs provide you all the information you'll need (transmit/receive counts, simulation time) to compute the energy consumption. Specifically, you'd want to assign some energy per packet transmission, some energy per packet reception, and some power consumption while idling. If you dynamically adjust power level or packet duration in your protocol, you will need to account for that too.
Practical devices that use UnetStack often have a battery voltage parameter that provides some measure of the energy available. However, this may be hard to use, as battery voltage does not depend linearly on the remaining energy and is highly dependent on the actual battery chemistry.
Something else to bear in mind when developing routing protocols that use energy information: transmitting remaining-energy information from a node to its neighbours itself costs energy!
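A minimal sketch of this kind of energy bookkeeping, assuming placeholder power figures (transmit, receive and idle power in watts) and a fixed packet duration that you would calibrate for your own modem and protocol:

```python
# Minimal sketch of an energy accounting model built from trace/log statistics,
# along the lines suggested above. All numbers below are assumed placeholders.

TX_POWER_W = 35.0      # power draw while transmitting (assumed)
RX_POWER_W = 1.5       # power draw while receiving (assumed)
IDLE_POWER_W = 0.05    # power draw while idling (assumed)

def energy_consumed(tx_count, rx_count, packet_duration_s, sim_time_s):
    """Estimate energy (in joules) from transmit/receive counts and simulation time."""
    tx_time = tx_count * packet_duration_s
    rx_time = rx_count * packet_duration_s
    idle_time = max(sim_time_s - tx_time - rx_time, 0.0)
    return (TX_POWER_W * tx_time
            + RX_POWER_W * rx_time
            + IDLE_POWER_W * idle_time)

# Example: 120 transmissions and 300 receptions of 1-second packets
# over a 3600-second simulation, starting from a 10 kJ energy budget.
remaining = 10_000.0 - energy_consumed(120, 300, 1.0, 3600.0)
print(f"Remaining energy: {remaining:.1f} J")
```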

Related

Looking for a description of "sensors" column in glances system monitoring tool

glances provides a "top-like" display with a list of sensors and what temperature those sensors are reporting. One in particular is named "edge". Can someone explain where or what this sensor is?
I ran some benchmarking software on my older AMD GPU (RX 590) and my CPU fan starts spinning very fast, but the CPU Composite temperature is in the 40 °C range and CPU usage is minimal.
The sensor marked "edge" shows a temperature of around 75 °C.
Thanks in advance
Apparently the 'edge' sensor is the video card (GPU) temperature.
I don't have any documentation to reference, but whenever the GPU is very busy that temperature increases.
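If you want to confirm this on your own machine, one way (assuming a Linux system with the standard hwmon sysfs layout) is to list each hwmon chip together with its temperature labels; on amdgpu cards one of those labels is "edge":

```python
# List each hwmon chip and its labelled temperatures by walking /sys/class/hwmon.
# Paths follow the standard hwmon layout; adjust if your system differs.
from pathlib import Path

for hwmon in sorted(Path("/sys/class/hwmon").glob("hwmon*")):
    chip = (hwmon / "name").read_text().strip()
    for label_file in sorted(hwmon.glob("temp*_label")):
        label = label_file.read_text().strip()
        temp_file = hwmon / label_file.name.replace("_label", "_input")
        temp_c = int(temp_file.read_text()) / 1000  # values are in millidegrees
        print(f"{chip}: {label} = {temp_c:.1f} °C")
```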

JMeter: how different type of timers can affect each others

I need to create a load test for a certain number of requests in a given time. I managed to set up the Precise Throughput Timer and I believe I understand how it works. What I don't understand is how other timers, specifically the Gaussian Random Timer, would affect it.
I have run my test plan with and without the Gaussian Random Timer, but I don't see much difference in the results. I'm wondering whether adding the Gaussian Random Timer would help me better simulate my users' behaviour?
I would say that these timers are mutually exclusive:
Precise Throughput Timer - allows you to reach and maintain the desired throughput (number of requests per given amount of time)
Gaussian Random Timer - allows you to simulate "think time"
If your goal is to mimic real users' behaviour as closely as possible, go for the Gaussian Random Timer, because real users don't hammer the application under test non-stop; they need some time to "think" between operations, i.e. locate a button and move the mouse pointer there, read something, type something, etc. So if your test assumes simulating real users using real browsers, go for the Gaussian Random Timer and put realistic think times between operations. If you need your test to produce a certain number of hits per second, just increase the number of threads (virtual users) accordingly. Check out What is the Relationship Between Users and Hits Per Second? for a comprehensive explanation if needed.
On the other hand, the Precise Throughput Timer is handy when there are no "real users", for example when you're testing an API, a database or a message queue and need to send a specific number of requests per second.
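As a rough illustration of the difference (outside of JMeter, with arbitrary numbers): the Gaussian Random Timer produces a per-iteration pause drawn around a constant offset with a given deviation, while a throughput-oriented timer effectively spaces requests so that a target number of them lands in a given period. This sketch is only a caricature of the behaviour, not how JMeter implements either timer internally:

```python
# Caricature of the two timer behaviours; all numbers are arbitrary.
import random

def gaussian_think_time(offset_ms=2000, deviation_ms=500):
    """Per-iteration 'think time' pause for a simulated user, in milliseconds."""
    return max(0.0, random.gauss(offset_ms, deviation_ms))

def constant_throughput_spacing(requests=100, period_s=60):
    """Average gap between requests needed to hit the target throughput, in ms."""
    return period_s / requests * 1000

print([round(gaussian_think_time()) for _ in range(5)])  # varies per iteration
print(constant_throughput_spacing())                     # fixed pacing: 600 ms
```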

"Optimum" sampling interval of sensor system

I am working with a system, S1, where many remote devices report in to a central server at constant intervals, f. Each device reports asynchronously with the rest of the system.
The (complete) state of S1 can be queried via a request-response API.
Is there an 'optimal' frequency for another system, S2, to query S1 that balances resource consumption and concurrency between S1 and S2?
A naive reading of Nyquist-Shannon leads me to 0.5f. Is there a better alternative?
You are best off sampling at f. Unless you are really concerned about the frequency response of your system (if you are, then you also need to be really concerned about the phase response, and asynchronous reporting means that you are not), you don't want to change sampling rates in the system. It is best to use the same sample rate throughout, even if you are resampling.
If you resample f to 0.5f, then you need to properly implement a new low-pass filter (LPF) on the data to avoid aliasing, and that doesn't sound like what you want to do.
If your data is all very slow compared to f, then you probably should reduce f if you want to reduce resource usage (i.e. broadcast or battery power).
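A small illustration of the aliasing point, assuming for example an original rate of f = 100 Hz: scipy.signal.decimate applies the anti-aliasing low-pass filter before downsampling, whereas naive slicing does not, so content above the new Nyquist frequency folds back into the band you keep.

```python
# Downsampling from rate f to 0.5*f: filtered decimation vs naive slicing.
import numpy as np
from scipy import signal

f = 100.0                              # original sample rate, Hz (assumed)
t = np.arange(0, 10, 1 / f)
# 3 Hz component plus a 40 Hz component that exceeds the new Nyquist (25 Hz)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

naive = x[::2]                         # aliased: the 40 Hz tone folds to 10 Hz
filtered = signal.decimate(x, 2)       # low-pass filtered, then downsampled

print(len(naive), len(filtered))
```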

Data compression techniques for power plant data

I have recently been studying data management on my own. After some reading, I still don't have the whole picture of how data flows from data acquisition to a database or warehouse.
In a power plant I have 1000 sensors installed, and I want to know what happens before the data is stored in the database. For instance, the sensor data is sampled at 1 Hz; with this large amount of data we presumably need to compress it before sending it to the database. I want to know how all of this is done, especially the data compression: if the data are digital values with timestamps, what kinds of compression techniques can be used, and how is data compressed in a Big Data setting?
The way OSIsoft PI does this is by checking how much a collected point has deviated from the previously stored point. If the deviation is small, the point gets "dropped", so only meaningful data is stored. When you ask for a value at a time for which no data exists, PI interpolates it.
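A minimal sketch of that idea: a simple deadband filter plus linear interpolation for query times where no point was kept. Real PI compression uses a more sophisticated swinging-door algorithm; the threshold and sample values here are made up.

```python
# Simple deadband compression: keep a point only if it deviates from the
# last kept value by more than the deadband; interpolate for other times.

def deadband_compress(samples, deadband):
    """samples: list of (timestamp, value) pairs in time order."""
    kept = [samples[0]]
    for ts, value in samples[1:]:
        if abs(value - kept[-1][1]) > deadband:
            kept.append((ts, value))
    return kept

def interpolate(kept, ts):
    """Linearly interpolate a value at time `ts` from the kept points."""
    for (t0, v0), (t1, v1) in zip(kept, kept[1:]):
        if t0 <= ts <= t1:
            return v0 + (v1 - v0) * (ts - t0) / (t1 - t0)
    raise ValueError("timestamp outside stored range")

raw = [(0, 20.0), (1, 20.01), (2, 20.02), (3, 21.5), (4, 21.51)]
kept = deadband_compress(raw, deadband=0.1)  # drops the near-duplicate points
print(kept)                                  # [(0, 20.0), (3, 21.5)]
print(interpolate(kept, 2))                  # ~21.0 by linear interpolation
```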
Data can be compressed in many ways, from zipping it up to totally customised solutions. In fact, for power plant data such as you are looking at, one of the larger systems is PI from OSIsoft. I used to work for a company that used it for 8 power stations. They have a completely bespoke database system where they store all their measurements. It is apparently optimised so that frequent readings from a sensor take up little space, and missing readings don't increase the space taken much. How they do it I have no idea - I expect it is proprietary and they won't tell people.
However, how data flows from sensor to database can be complex. Have a poke around the OSIsoft site - they have some data available.

Mobile battery percent determining program

How does a mobile phone determine the percentage of battery left? Is it related to quantum physics?
I think that it runs some kind of test to determine the battery's efficiency at the time and, on the basis of the results, determines how much battery is left. Please also share some code for this.
How does a mobile phone determine the percentage of battery left?
It measures the voltage supplied by the battery. Since the battery's voltage drop usually isn't linear, the device likely does some processing to map the measured voltage onto a linear "percent lifetime remaining" scale, with 0% corresponding to the voltage at which the device can no longer operate.
Similarly the charging circuit monitors the battery voltage during charging and either stops charging or switches to a trickle charging process when the battery is "full," meaning that the voltage has stopped increasing over time.
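A sketch of that voltage-to-percentage mapping, using a made-up discharge curve for a single Li-ion cell. Real devices use curves characterised for their specific battery chemistry, and many also track charge flowing in and out with a dedicated fuel-gauge chip.

```python
# Map a measured battery voltage onto a percentage via an assumed discharge
# curve (voltage, percent), using linear interpolation between curve points.
CURVE = [(3.30, 0), (3.60, 10), (3.70, 30), (3.80, 55), (3.95, 80), (4.20, 100)]

def percent_from_voltage(v):
    """Estimate remaining percentage from the measured cell voltage."""
    if v <= CURVE[0][0]:
        return 0
    if v >= CURVE[-1][0]:
        return 100
    for (v0, p0), (v1, p1) in zip(CURVE, CURVE[1:]):
        if v0 <= v <= v1:
            return p0 + (p1 - p0) * (v - v0) / (v1 - v0)

print(percent_from_voltage(3.75))  # roughly 42% on this example curve
```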
