How to measure the amount of time taken to do any kind of operation inside apps such as Maya, Creo Parametric, Adobe Premiere, etc - maya

I want to measure the amount of time taken to do any kind of operation inside apps such as Creo Parametric 5.0, Adobe Premiere Pro, Maya, Adobe Creative Cloud, Lightroom CC, or any other design app.
The idea is to measure the performance (time taken per operation) to catch performance issues.

When you create your library of actions, you can create a decorator that logs and times any action, so you can monitor what's going on.
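For example, in a Python-scriptable app such as Maya, a minimal sketch of such a decorator could look like this (the action name is purely illustrative):

import functools
import logging
import time

logging.basicConfig(level=logging.INFO)

def timed(func):
    """Log how long the wrapped action takes, in milliseconds."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            logging.info("%s took %.1f ms", func.__name__, elapsed_ms)
    return wrapper

@timed
def import_reference(path):
    # e.g. maya.cmds.file(path, reference=True) in Maya
    ...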

A method which is hacky and time-consuming but generic (it works even for software that does not provide (good) scripting) is to record your screen at high speed (such as 60 fps). Then you look at the frames and count how many pass between giving the order (click, key press) and the result (updated display).
The precision will be on the order of 1 / recording frequency (16 ms when recording at 60 fps). A drawback is that you are likely to measure more than just the operation you are interested in; for instance, if you want to benchmark the loading of a file, you will also measure the time it took to render it afterwards (which may, or should, be negligible).
I was able to apply this method using https://github.com/SerpentAI/D3DShot (increase the frame buffer size, which by default only lasts 1 second). Note that when exporting frames to files, the frame numbers go backward in time.
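For reference, a rough sketch of the capture side with D3DShot (parameter names such as frame_buffer_size and target_fps are taken from the project's README, so double-check them against your installed version):

import time
import d3dshot

d = d3dshot.create(capture_output="numpy", frame_buffer_size=600)  # ~10 s of frames at 60 fps
d.capture(target_fps=60)   # start filling the frame buffer
time.sleep(5)              # trigger the operation you want to time during this window
d.stop()

# frame_buffer holds the captured frames (numpy arrays); per the note above,
# the most recent frame tends to come first, i.e. numbering runs backward in time.
frames = list(d.frame_buffer)
print(len(frames), frames[0].shape)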
It may be possible to make this method less hacky by using computer vision algorithms so you don't have to count frames manually.

Related

How to set iBeacon TX power byte

I am working on the ESP32 microcontroller and I would like to implement the iBeacon advertising feature. I have been reading about iBeacon and have learnt about the specific format that the iBeacon packet uses:
https://os.mbed.com/blog/entry/BLE-Beacons-URIBeacon-AltBeacons-iBeacon/
From what I understand, the iBeacon prefix is preset and not meant to be modified. I must set a custom UUID, major and minor numbers, such as:
uint8_t Beacon_UUID[16] = {0x00,0x11,0x22,0x33,0x44,0x55,0x66,0x77,0x88,0x99,0xAA,0xBB,0xCC,0xDD,0xEE,0xFF};
uint8_t Beacon_MAJOR[2] = {0x12,0x34};
uint8_t Beacon_MINOR[2] = {0x56,0x78};
The only thing that I am confused about is the TX Power byte. What should I set it to?
According to the website that I have referred above:
A scanning application reads the UUID, major number and minor number and references them against a database to get information about the beacon; the beacon itself carries no descriptive information - it requires this external database to be useful. The TX power field is used with the measured signal strength to determine how far away the beacon is from the smart phone. Please note that TxPower must be calibrated on a beacon-by-beacon basis by the user to be accurate.
It mentions what TxPower is and how it should be determined, but I still cannot make any sense of it. Why would I need to measure how far away the beacon is from the smart phone? That should be done by the iBeacon scanner, not the advertiser (me).
When you are making a hardware device transmit iBeacon, it is your responsibility to measure the output power of the transmitter and put the corresponding value into the TxPower byte of the iBeacon transmission.
Why? Because receiving applications that detect your beacon need to know how strong your transmitter is to estimate distance. Otherwise there would be no way for the receiving application to tell whether a medium signal level like -75 dBm comes from a nearby weak transmitter or a far away strong transmitter.
The basic procedure is to put a receiver exactly one meter away from your transmitter and measure the RSSI at that distance. That one-meter RSSI is what you put into the TxPower byte of the iBeacon advertisement.
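To make that concrete, here is a hedged sketch (plain Python, purely illustrative): the calibrated one-meter RSSI is stored as a signed two's-complement byte, and a scanner typically turns the received RSSI plus that TxPower value into a distance estimate with a log-distance path-loss model (the exponent n is an assumption, roughly 2 in free space):

measured_rssi_at_1m = -59                   # dBm, from your calibration run
tx_power_byte = measured_rssi_at_1m & 0xFF  # signed two's complement -> 0xC5
print(hex(tx_power_byte))

def estimate_distance_m(rssi, tx_power=-59, n=2.0):
    # Rough distance estimate in metres from a received RSSI (dBm).
    return 10 ** ((tx_power - rssi) / (10 * n))

print(estimate_distance_m(-75))             # a "medium" signal works out to roughly 6 m with n = 2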
The specifics of how to measure this properly can be a bit tricky, because every receiver has a different "specificity", meaning it will read a higher or lower RSSI depending on its antenna gain. When Apple came out with iBeacon several years ago, they declared the reference receiver to be an iPhone 4S -- the newest phone available at that time. You would run a beacon detection app like AirLocate (not available in the App Store) or my Beacon Locate (available in the App Store). The basic procedure is to aim the back of the phone at the beacon when it is exactly one meter away and use the app to measure the RSSI. Many detector apps have a "calibrate" feature which averages RSSI measurements over 30 seconds or so. For best results when calibrating, do this with both transmitter and receiver at least 3 feet above the ground, and minimize metal or dense walls nearby. Ideally, you would do this outdoors using two plastic tripods (or do the same inside an antenna chamber).
It is hard to find a reference iPhone 4S these days, and other iPhone models can measure surprisingly different RSSI values. My tests show that an iPhone SE 2nd generation measures signals very similarly to an iPhone 4S. But even these models are no longer made. If you cannot get one of these, use the oldest iPhone you can get without a case and take the best measurement you can as described above. Obviously an ideal measurement requires more effort -- you have to decide how much effort you are willing to put into this. An ideal measurement is only important if you expect receiving applications to want the best distance estimates possible.

Flink Dashboard Throughput doesn't add up

I have two operators, a source and a map. The incoming throughput of the map is stuck at just above 6K messages/s, whereas the message count reaches the size of the whole stream (~350K) in under 20 s (see duration). 350000/20 means that I have a throughput of at least 17500, not 6000 as Flink suggests! What's going on here?
As shown in the picture:
start time = 13:10:29
all messages already read by 13:10:46 (less than 20 s later)
I checked the Flink library code, and it seems that the numRecordsOutPerSecond statistic (as well as the similar ones) operates on a window. This means that it displays the average throughput of the last X seconds, not the average throughput of the whole execution.
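As a toy illustration (not Flink code), here is a rate meter over a trailing window; if the window is, say, 60 seconds, a 20-second burst of 350K records averages out to roughly 5,800 records/s, which is in line with the ~6K shown on the dashboard:

from collections import deque

class WindowedMeter:
    def __init__(self, window_seconds=60):
        self.window = window_seconds
        self.events = deque()              # timestamps of observed records

    def mark(self, timestamp, n=1):
        self.events.extend([timestamp] * n)

    def rate(self, now):
        while self.events and self.events[0] < now - self.window:
            self.events.popleft()          # drop records outside the window
        return len(self.events) / self.window

m = WindowedMeter(window_seconds=60)
for second in range(20):                   # ~350K records arrive in 20 s
    m.mark(timestamp=second, n=17_500)
print(m.rate(now=20))                      # ~5833/s, although the burst ran at 17500/s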

Gatling simulation with fixed number of users for specific period of time

I have my Gatling scenario set up, and now I want to configure a simulation with a fixed number of users for a specific period of time: the number of users should initially be increased gradually to a specific value and then kept there by adding new users as required when existing ones finish.
I specifically don't want to use constantUsersPerSec (which injects users at a constant rate) but something like .throttle(reachUsers(100) in rampUpTime, holdFor(10 minute)) which should inject users when required.
If it's still relevant: Gatling supports a throttle method pretty much as you outlined it. You can use the following building blocks (taken from the docs):
reachRps(target) in (duration): target a throughput with a ramp over a given duration.
jumpToRps(target): jump immediately to a given targeted throughput.
holdFor(duration): hold the current throughput for a given duration.
So a modified example for your use case could look something like this:
setUp(scn.inject(constantUsersPerSec(100) during(10 minutes))).throttle(
  reachRps(100) in (1 minute),
  holdFor(9 minutes)
)

Is there an easy way to get the percentage of successful reads of last x minutes?

I have a setup with a BeagleBone Black which communicates over I²C with its slaves every second and reads data from them. Sometimes the I²C readout fails, though, and I want to gather statistics about these failures.
I would like to implement an algorithm which displays the percentage of successful communications over the last 5 minutes (up to 24 hours) and updates that value constantly. If I implemented that 'normally', with an array where I store the success/no success of every second, that would mean a lot of wasted RAM/CPU for a minor feature (especially if I want to see the statistics of the last 24 hours).
Does someone know a good way to do that, or can anyone point me in the right direction?
Why don't you just implement a low-pass filter? For every successful transfer you push in a 1, for every failed one a 0; the result is a number between 0 and 1. Assuming that your transfers happen periodically, this works well -- you just have to adjust the cutoff frequency of that filter to your desired "averaging duration".
However, I can't follow your RAM argument: assuming you store one byte representing success or failure per transfer, which you say happens every second, you end up with 86400 B per day -- roughly 85 KB/day is really negligible.
EDIT: Cutoff frequency is a term from signal theory and describes the highest or lowest frequency that passes a low- or high-pass filter.
Implementing such a low-pass filter is trivial; something like this (Python-flavoured pseudocode):
new_val = 1.0    # init with no failed transfers
alpha = 0.001    # smoothing factor; smaller alpha = longer averaging
while True:
    old_val = new_val
    success = do_transfer_and_return_1_on_success_or_0_on_failure()
    new_val = alpha * success + (1 - alpha) * old_val
That's a single-tap IIR (infinite impulse response) filter; single tap because there's only one alpha and thus, only one number that is stored as state.
EDIT2: the value of alpha defines the behaviour of this filter.
EDIT3: you can use a filter design tool to give you the right alpha; just set your low-pass filter's cutoff frequency to something like 0.5/integrationLengthInSamples, select an order of 0 for the IIR, and use an elliptic design method (most tools default to Butterworth, but 0th-order Butterworths don't do a thing).
I'd use scipy and convert the resulting (b, a) tuple (a will be 1 here) to the correct form for this feedback structure.
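If a full filter-design tool feels like overkill, a rough back-of-the-envelope alternative (my own sketch, assuming one transfer per second and the single-tap filter above; the helper name is made up) is to derive alpha directly from the desired averaging horizon:

import math

def alpha_for_horizon(horizon_s, sample_period_s=1.0):
    # The exponential filter "forgets" with a time constant of roughly horizon_s seconds.
    return 1.0 - math.exp(-sample_period_s / horizon_s)

print(alpha_for_horizon(5 * 60))     # ~0.0033 for a 5 minute horizon
print(alpha_for_horizon(24 * 3600))  # ~1.2e-5 for a 24 hour horizon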
UPDATE In light of the comment by the OP, 'determine a trend of which devices are failing', I would recommend the geometric average that Marcus Müller put forward.
ACCURATE METHOD
The method below is aimed at obtaining 'well defined' statistics for performance over time that are also useful for 'after the fact' analysis.
Notice that the geometric average has a 'look back' over recent messages rather than over a fixed time period.
Maintain a rolling array of 24*60/5 = 288 'prior success rates' (SR[i] with i=-1, -2,...,-288) each representing a 5 minute interval in the preceding 24 hours.
That will consume only about 2.3 KB if the elements are 64-bit doubles.
To 'effect' constant updating use an Estimated 'Current' Success Rate as follows:
ECSR = (t * S/M + (300 - t) * SR[-1]) / 300
where S and M are the counts of successes and messages in the current (partially complete) period, and SR[-1] is the previous (now complete) bucket.
t is the number of seconds elapsed in the current bucket.
NB: When you start up you need to use 300*S/M/t.
In essence the approximation assumes the error rate was steady over the preceding 5 - 10 minutes.
To 'effect' a 24-hour look back, you can either 'shuffle' the data down (by copy or memcpy()) at the end of each 5-minute interval, or implement a 'circular array' by keeping track of the current bucket index.
NB: For many management/diagnostic purposes intervals of 15 minutes are often entirely adequate. You might want to make the 'grain' configurable.
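A hedged sketch of the bucketed scheme above (plain Python, names illustrative: 288 five-minute buckets plus the blended 'current' estimate; the start-up special case is simplified):

BUCKET_SECONDS = 300
NUM_BUCKETS = 24 * 60 // 5                  # 288 buckets covering 24 hours

class SuccessRateTracker:
    def __init__(self):
        self.sr = [1.0] * NUM_BUCKETS       # circular array of prior success rates
        self.idx = 0                        # index of the bucket being filled
        self.successes = 0                  # S: successes in the partial bucket
        self.messages = 0                   # M: messages in the partial bucket
        self.t = 0                          # seconds elapsed in the partial bucket

    def record(self, success):
        # Call once per second, after each transfer attempt.
        self.messages += 1
        self.successes += 1 if success else 0
        self.t += 1
        if self.t == BUCKET_SECONDS:        # close the 5 minute bucket
            self.sr[self.idx] = self.successes / self.messages
            self.idx = (self.idx + 1) % NUM_BUCKETS
            self.successes = self.messages = self.t = 0

    def ecsr(self):
        # Estimated 'current' success rate: blend the partial bucket with SR[-1].
        prev = self.sr[(self.idx - 1) % NUM_BUCKETS]
        if self.messages == 0:
            return prev
        partial = self.successes / self.messages
        return (self.t * partial + (BUCKET_SECONDS - self.t) * prev) / BUCKET_SECONDS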

Compensating for channel effects

I am trying to work on a system where the quality of a recorded sentence is rated by a computer. There are three modes under which this system operates:
When the person records a sentence using a mic and mixer arrangement.
When the user records over a landline.
When the user records over a mobile phone.
I notice that the scores I get from recordings using the above 3 sources are in the following order: Mic_score > Landline_score > Mobile_score
It is likely that the above order is because of the effects of the codecs and channel characteristics. My question is:
What can be done to compensate for channel/codec introduced artifacts to get consistent scores across channels? If some sort of inverse filtering, then please provide some links where I could get started.
How do I detect what channel the input speech has been recorded on? Use HMMs?
Edit 1: I am not at liberty to go into the details of the criteria. The current scores that I get from the mic, landline and mobile (for the same sentence, spoken similarly over the three media) are something like 80, 66 and 41. This difference may be because of the channel effects. If the content and manner of speaking are the same, then I am looking for an algorithm that normalizes the scores (they need not be identical, but they should be close).
It may very well be that the sound quality is different.
Have you tried listening to some examples?
You can also use any spectrum analyzer to look at the data in detail. I suggest http://www.baudline.com/. Things you should look out for: the distance between the noise floor and the speech.
Also look at the high-frequency noise bursts when the letters t, f and s are spoken. On low-quality lines the difference between these letters disappears.
Why do you want to skew the quality measures? Giving an objective response of the quality seems to make more sense.
The landline codec will remove all frequencies around and above 4 kHz. The cell phone codec will throw away more information as part of a lossy compression process. Unless you have another side channel of information regarding the original audio content, there is no reliable way to recover the audio that was thrown away.
Your best bet to normalize is to low-pass filter the audio to match the 8 kHz telco codec, and then run the result through some cellular-standard compression algorithm (there may be one published for your particular mobile cellular protocol). This should reduce the quality of all 3 signals to about the same level.
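A hedged sketch of the band-limiting half of that idea (Python with scipy; the lossy mobile-codec pass is not shown, and the function name is made up):

from scipy.signal import butter, sosfiltfilt, resample_poly

def to_telco_band(audio, sample_rate):
    # Resample to the 8 kHz rate used by landline codecs.
    audio_8k = resample_poly(audio, up=8000, down=sample_rate)
    # Butterworth band-pass approximating the roughly 300-3400 Hz telephone channel.
    sos = butter(4, [300, 3400], btype="bandpass", fs=8000, output="sos")
    return sosfiltfilt(sos, audio_8k)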
