Root delay and root dispersion values in the NTP protocol?

I'm implementing an NTP server and client (for the first time). I have a few questions for which I couldn't find a detailed explanation. Please help me with the topics below.
1. What are the root dispersion, precision and poll fields in the NTP packet format?
2. Can I assign arbitrary values to them, or do I need to calculate them?
Any pointers are appreciated.

So a good place to start would be the Official NTP Docs
rootdisp - total dispersion to the primary reference clock
precision - precision of the system clock (log2 s)
poll - maximum interval between successive messages (log2 s)
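For reference, here is a minimal C sketch of the NTPv4 packet header from RFC 5905, with those three fields marked. (Illustration only: on the wire every field is big-endian, so real code should serialize explicitly rather than rely on struct layout.)

#include <stdint.h>

typedef struct {
    uint8_t  li_vn_mode;  /* leap indicator (2 bits), version (3), mode (3) */
    uint8_t  stratum;     /* stratum of the local clock */
    int8_t   poll;        /* poll interval exponent, log2 seconds */
    int8_t   precision;   /* clock precision exponent, log2 seconds */
    uint32_t root_delay;  /* round-trip delay to the primary source, 16.16 fixed point */
    uint32_t root_disp;   /* total dispersion to the primary source, 16.16 fixed point */
    uint32_t ref_id;      /* reference clock identifier */
    uint64_t ref_ts;      /* reference timestamp, 32.32 fixed point */
    uint64_t orig_ts;     /* origin timestamp */
    uint64_t recv_ts;     /* receive timestamp */
    uint64_t xmit_ts;     /* transmit timestamp */
} ntp_packet;

/* Example: convert an NTP short (16.16 fixed point) such as rootdisp to seconds. */
static double ntp_short_to_seconds(uint32_t v)
{
    return (double)v / 65536.0;
}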
You don't directly assign any of those values - you need to read up on the ntp.conf file and which directives to use there for your specific environment.
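For example, a minimal ntp.conf for a typical Linux client might look like the sketch below (the pool hostname is a placeholder; minpoll/maxpoll bound the poll interval exponent discussed above):

# /etc/ntp.conf - minimal client sketch
driftfile /var/lib/ntp/ntp.drift
server 0.pool.ntp.org iburst minpoll 6 maxpoll 10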
You may also want to look at an answer I gave on Superuser
You didn't say which OS or which NTP client you are using, so this is all generic advice based on Linux. If you would like specific help, please expand your question with some examples and some of the reference material you have already looked at; otherwise it will be very hard to help.

Related

How to set iBeacon TX power byte

I am working on the ESP32 microcontroller and I would like to implement the iBeacon advertising feature. I have been reading about iBeacon and have learnt about the specific packet format it uses:
https://os.mbed.com/blog/entry/BLE-Beacons-URIBeacon-AltBeacons-iBeacon/
From what I understand, the iBeacon packet prefix is fixed and not meant to be modified. I must set a custom UUID and major and minor numbers, such as:
uint8_t Beacon_UUID[16] = {0x00,0x11,0x22,0x33,0x44,0x55,0x66,0x77,0x88,0x99,0xAA,0xBB,0xCC,0xDD,0xEE,0xFF};
uint8_t Beacon_MAJOR[2] = {0x12,0x34};
uint8_t Beacon_MINOR[2] = {0x56,0x78};
The only thing that I am confused about is the TX Power byte. What should I set it to?
According to the website that I referred to above:
A scanning application reads the UUID, major number and minor number and references them against a database to get information about the beacon; the beacon itself carries no descriptive information - it requires this external database to be useful. The TX power field is used with the measured signal strength to determine how far away the beacon is from the smart phone. Please note that TxPower must be calibrated on a beacon-by-beacon basis by the user to be accurate.
It mentions what TxPower is and how it should be determined, but I still cannot make sense of it. Why would I need to measure how far away the beacon is from the smartphone? That should be done by the iBeacon scanner, not the advertiser (me).
When you are making a hardware device transmit iBeacon, it is your responsibility to measure the output power of the transmitter and put the corresponding value into the TxPower byte of the iBeacon transmission.
Why? Because receiving applications that detect your beacon need to know how strong your transmitter is in order to estimate distance. Otherwise there would be no way for the receiving application to tell whether a medium signal level like -75 dBm comes from a nearby weak transmitter or a faraway strong transmitter.
The basic procedure is to put a receiver exactly one meter away from your transmitter and measure the RSSI at that distance. The one-meter RSSI is what you put into the TxPower byte of the iBeacon advertisement.
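As a concrete sketch (the -59 dBm figure is only an illustrative calibration result, and the path-loss exponent n is an assumption about the environment), the one-meter RSSI goes into the advertisement as a signed two's-complement byte, and a receiver can plug it into a log-distance model:

#include <math.h>
#include <stdint.h>

/* Transmitter side: store the calibrated 1 m RSSI (signed dBm) as the
   iBeacon TX power byte. -59 dBm is just an example value. */
int8_t measured_rssi_at_1m = -59;
uint8_t Beacon_TXPOWER = (uint8_t)measured_rssi_at_1m; /* 0xC5 */

/* Receiver side: estimate distance from the advertised TX power and the
   currently measured RSSI. n is roughly 2.0 in free space, higher indoors. */
double estimate_distance_m(int tx_power_dbm, int rssi_dbm, double n)
{
    return pow(10.0, (double)(tx_power_dbm - rssi_dbm) / (10.0 * n));
}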
The specifics of how to measure this properly can be a bit tricky, because every receiver has a different sensitivity, meaning it will read a higher or lower RSSI depending on its antenna gain. When Apple came out with iBeacon several years ago, they declared the reference receiver to be an iPhone 4S -- this was the newest phone available at that time. You would run a beacon detection app like AirLocate (not available in the App Store) or my Beacon Locate (available in the App Store). The basic procedure is to aim the back of the phone at the beacon from exactly one meter away and use the app to measure the RSSI. Many detector apps have a "calibrate" feature which averages RSSI measurements over 30 seconds or so. For best results when calibrating, keep both transmitter and receiver at least 3 feet above the ground and minimize nearby metal and dense walls. Ideally, you would do this outdoors using two plastic tripods (or do the same inside an antenna chamber).
It is hard to find a reference iPhone 4S these days, and other iPhone models can measure surprisingly different RSSI values. My tests show that an iPhone SE 2nd generation measures signals very similarly to an iPhone 4S. But even these models are not made anymore. If you cannot get one of these, use the oldest iPhone you can get without a case and take the best measurement you can as described above. Obviously an ideal measurement requires more effort -- you have to decide how much effort you are willing to put into this. An ideal measurement matters only if you expect receiving applications to want the best distance estimates possible.

How to total amounts looping through a Service Bus queue in an Azure Logic App

I have an Azure Logic App that pulls all messages off a queue once a day. In the loop action I can extract the amount and convert it to float, but I am not sure how to keep a running total across all of the messages in the queue. To be clear: if there are 10 messages in the queue and each message has an amount of $1, the running total at the end should be $10. Does anyone know how to do this?
I have tried using the Math function add(variables('TotalPayments'), variables('PaymentAmount')), where:
TotalPayments - running total
PaymentAmount - current payment extracted from the current message.
Found a solution to this. What I used was an Increment variable action, passing it the amount from the current message. There could be an easier way of doing this, but it achieves the desired result and I spent far too much time on it. Microsoft may have intended some other way, but isn't forthcoming in the documentation (that I could find; I would gladly be told otherwise). As I indicated above, I was using the Math function add, which seemed the logical choice, but that didn't work.
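In outline, the pattern that worked looks like this (the action layout and the amount expression are illustrative, not the exact names from my app):

Initialize variable: TotalPayments, type Float, value 0
For each message:
    Increment variable: TotalPayments, value = PaymentAmount (the float extracted from the current message)

As far as I can tell, this works where add() did not because a Set variable action cannot reference the variable it is setting, while Increment variable adds its value to the existing total on every iteration.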

How to evaluate a recurrent connection in an artificial neural network?

I just can't understand how I should compute the output of a neural network which contains a recurrent connection.
So here is an example (I can't post images yet..):
http://i.imgur.com/XdXupIj.png
(i_1 and i_2 are the input values; w_1, w_2, w_3 and w_r are the connection weights; o_1 is the output value.)
For the sake of simplicity, let's say that there are no activation or transfer functions.
If I understand the workings of ANNs correctly, then without taking the red recurrent connection into consideration the output is calculated as
o_1=(w_1*i_1+w_2*i_2)*w_3
However, what is the case when the red connection is taken into account? Would it be
o_1=((w_1*i_1+w_2*i_2)+(w_1*i_1+w_2*i_2)*w_r)*w_3
maybe? But that's just my guess.
Thanks in advance.
An RNN is not a usual network: a usual (feedforward) network has no notion of time, but an RNN does. Digitized signals arrive at the input of the net over time, so for i_1 we have not one value but a sequence i_1[t=0], i_1[t=1], i_1[t=2], ... The red connection has a one-time-unit delay inside it. Thus, to calculate the output of H1 you need to use the following recurrent formula:
o[t] = (w_1*i_1[t] + w_2*i_2[t]) + o[t-1]*w_r
The o[t-1] term is the value delayed by one unit of time; the network output is then o_1[t] = o[t]*w_3.
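A minimal C sketch of evaluating this over a short input sequence (the weights and inputs are made-up example values, and the hidden state is assumed to be zero before t = 0):

#include <stdio.h>

int main(void)
{
    const double w1 = 0.5, w2 = 0.3, w3 = 0.8, wr = 0.2; /* example weights */
    const double i1[] = { 1.0, 0.0, 1.0 };               /* example inputs */
    const double i2[] = { 0.0, 1.0, 1.0 };
    double h = 0.0; /* output of H1, assumed 0 before t = 0 */

    for (int t = 0; t < 3; t++) {
        h = w1 * i1[t] + w2 * i2[t] + wr * h; /* o[t] depends on o[t-1] via w_r */
        printf("t=%d  o_1=%f\n", t, w3 * h);  /* o_1[t] = o[t] * w_3 */
    }
    return 0;
}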
Speaking about recurrent neural networks, you may find many examples of their use. Recently we participated in a machine learning contest and tried to use an RNN for classification of EEG signals, but ran into some obstacles. Here are the details: http://rnd.azoft.com/classification-eeg-signals-brain-computer-interface/
A recurrent neural network (RNN) is a class of artificial neural network where connections between units form a directed cycle. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs.
To me, it seems like :
o_1=(w_1*i_1+w_2*i_2)*w_r*w_3
Note: please say whether this is homework.

XenServer C SDK units for utilization

I am using the Citrix XenServer C SDK to obtain values of host_cpu utilization.
Any idea what units this data is represented in? The test/test_get_records.c example generates output that I can't interpret; I am expecting a percentage.
Please comment on how this figure relates to a percentage value.
Looking at the header file, it is a double.
http://opensrcd.ca.com/ips/07400_4/include/xen/api/xen_host_cpu.h
Depending on your machine you might try %llf instead of %lf but...
I think you are supposed to use their helper functions to access the data. eg:
/**
* Get the utilisation field of the given host_cpu.
*/
extern bool
xen_host_cpu_get_utilisation(xen_session *session, double *result, xen_host_cpu host_cpu);
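A hedged usage sketch (obtaining a live session and a host_cpu handle requires the usual SDK login and enumeration calls, omitted here; whether the result is a 0.0-1.0 fraction is my assumption, so verify before scaling to a percentage):

#include <stdio.h>
#include <xen/api/xen_host_cpu.h>

void print_cpu_utilisation(xen_session *session, xen_host_cpu host_cpu)
{
    double util = 0.0;
    if (xen_host_cpu_get_utilisation(session, &util, host_cpu))
        printf("CPU utilisation: %.1f%%\n", util * 100.0); /* assumes a 0.0-1.0 fraction */
}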
XenServer has since obsoleted these methods. It is, however, possible to use the CLI to run a script on the XenServer host using xe commands to obtain some of this utilization data.
However, the data obtained are in raw format: you would have to average the values of all CPU cores to get the overall CPU usage.

LPC17xx: Check if RTC is running

I am using NXP LPC17xx family microcontrollers (LPC1759 and LPC1768).
How can I determine for sure whether the RTC is running?
I am doing a test on
LPC_RTC->CCR & RTC_CCR_CLKEN
but it does not seem very reliable.
I have seen values in the year 3197 or so when turning on my device.
How can I tell if RTC is running and its value is not corrupt?
EDIT:
I ended up adding a simple sanity check on the RTC values:
bool DateTime::validate( const RTC_TIME_Type &time_info )
{
    // Reject any field that is outside its legal calendar range.
    if ( time_info.YEAR > 2100
         || time_info.DOY > 366
         || time_info.MONTH > 12
         || time_info.DOM > 31
         || time_info.HOUR > 23
         || time_info.MIN > 59
         || time_info.SEC > 59 )
        return false;
    return true;
}
It is run during my POST, as suggested below.
I battled with the RTC on that chip's grandfather (LPC2148) about 5 years ago. If you look on the Yahoo LPC2000 group (it also covers the LPC1000 chips) you'll see the RTC & its issues come up a lot.
Anyway, I'm going from memory here, but I think I concluded that reading the status register wasn't reliable enough. Maybe the issue was that when power was removed, if the battery backup was absent, things would get scrambled...
So what I recall I did was the following, during the boot phase:
(1) Enable RTC peripheral
(2) Read ALL RTC registers. In firmware, have "out of bounds" min & max values for each field (e.g. year must be at least 2005, and no larger than 2030)
(3) If any value is out of range, reset date & time to some hard-coded value (e.g. Jan. 1, 2005) (product would let user adjust time/date once booted)
(4) Take a snapshot of the registers; wait AT LEAST one second (use a timer peripheral to measure the time), then make sure the value changed (see the sketch below). I might have gone so far during boot-up as to set the values so that a 1-second tick would/should cause everything to roll over (probably 1 second before midnight, Dec. 31), ensure everything changes, then write back the original value + 1 second. (You would want to do this right as the value changes to avoid slipping seconds.)
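A minimal sketch of that tick check on the LPC17xx (the busy-wait is only a placeholder; a timer peripheral is the right way to measure one second, and the loop count depends on your core clock):

#include <stdbool.h>
#include <stdint.h>
#include "LPC17xx.h" /* CMSIS device header providing LPC_RTC */

static bool rtc_is_ticking(void)
{
    uint32_t before = LPC_RTC->CTIME0; /* consolidated seconds/minutes/hours register */
    for (volatile uint32_t i = 0; i < 20000000u; i++)
        ; /* crude delay of somewhat more than one second, clock dependent */
    return LPC_RTC->CTIME0 != before; /* a live RTC must have advanced */
}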
I will try to dig up the code and see if there was something more. I just recall finally concluding I had to run the damn thing & watch it work before I passed the P.O.S.T. for that peripheral.
(I kind of mentioned it in passing, but just to re-iterate... if your values seem corrupted on power-on, make sure your battery back-up circuitry is rock solid - even a basic circuit with a pair of diodes normally suffices. Could be that the clock is running when the product is running, but that when power is removed, its brains get scrambled.)
Also confronted with temperamental RTCs here...
I don't think a really reliable test is possible. You could store the last recorded time somewhere in non-volatile memory and check that the clock hasn't moved to a past date, and you can also test that the delta between two checks is not too big. That would catch something like the year 3000, but you cannot reduce the tested time lapse to, say, one month - you want the thing to wake up even if it was shelved for, say, a year.
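A rough sketch of that non-volatile check (nv_load_last_seen and nv_store_last_seen are hypothetical helpers for whatever non-volatile storage you have; the one-year bound is arbitrary):

#include <stdbool.h>
#include <time.h>

/* Hypothetical non-volatile storage helpers - not a real API. */
time_t nv_load_last_seen(void);
void   nv_store_last_seen(time_t t);

bool rtc_plausible(time_t now)
{
    time_t last = nv_load_last_seen();
    if (now < last)
        return false;                  /* clock moved into the past */
    if (now - last > 370L * 24 * 3600) /* jumped more than ~a year ahead */
        return false;
    nv_store_last_seen(now);
    return true;
}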
Do you have the possibility of consulting a time source at startup? E.g. an NTP server or some other device that your controller talks to that can be considered synchronized with a reliable time source?
You can route the RTC clock to an external pin and watch it on an oscilloscope or a logic analyzer.
IIRC I did just that for LPC1766/1768 (I have two identical boards populated with different processors).
