I am working on a time-critical operation in C with Arduino. I am reading the time from an RTC. My algorithm needs
1. local solar time
2. local time
3. local standard time meridian
4. equation of time.
5. Greenwich Mean Time.
I read the date and time from the RTC. How can I use that time to calculate the quantities above?
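The relationships involved can be sketched in C roughly as follows, assuming the commonly used approximation for the equation of time, EoT = 9.87 sin(2B) - 7.53 cos(B) - 1.5 sin(B) with B = 360/365 * (d - 81) degrees; the day of year, longitude and UTC offset come from the RTC and configuration, and all function names are illustrative:
#include <math.h>

#define DEG2RAD (3.14159265358979 / 180.0)

/* Equation of time in minutes for a given day of year (1..365). */
double equation_of_time_min(int day_of_year)
{
    double b = (360.0 / 365.0) * (day_of_year - 81) * DEG2RAD;
    return 9.87 * sin(2.0 * b) - 7.53 * cos(b) - 1.5 * sin(b);
}

/* Local solar time in decimal hours.
 * local_hours:      local clock time from the RTC, as decimal hours
 * longitude_deg:    site longitude in degrees, east positive
 * utc_offset_hours: local time zone offset from GMT, e.g. +5.5            */
double local_solar_time(double local_hours, double longitude_deg,
                        double utc_offset_hours, int day_of_year)
{
    double lstm = 15.0 * utc_offset_hours;           /* local standard time meridian, degrees */
    double tc   = 4.0 * (longitude_deg - lstm)       /* 4 minutes of time per degree */
                + equation_of_time_min(day_of_year); /* time correction, minutes */
    return local_hours + tc / 60.0;
}
Local time and GMT follow directly from the RTC reading and the time-zone offset (GMT = local time - offset); only the solar-time conversion needs the correction terms above.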
I want to use a timer to read data from a Simulink block into the workspace during simulation.
I made a simple .mdl model consisting of a Clock block connected to a Scope.
Then I wrote this simple code:
% Fire TimeStep once per second, ten times, at a fixed rate.
t = timer('Period', 1, 'TasksToExecute', 10, 'ExecutionMode', 'fixedRate');
t.TimerFcn = {@TimeStep};
start(t)

function time = TimeStep(~, ~)
% Load and start the model, then read the Clock block's current output.
load_system('mymodel');
set_param('mymodel', 'SimulationCommand', 'start');
block = 'mymodel/Clock';
rto = get_param(block, 'RuntimeObject');
time = rto.OutputPort(1).Data;
disp(time);
end
The problem is that when I run the code with a simulation time of 10, it shows "0" in the workspace, repeated ten times. I expected it to show the time from 1 to 10. I have also changed the solver to a discrete solver with a time step of 1.
The other thing I do not understand is that when I put a Ramp block in place of the Clock and change the code to:
block = 'mymodel/Ramp';
I get a "too many inputs" error.
I would appreciate any help.
You have two things that count time and seem to think that one of them is controlling the time in the other. It isn't.
More specifically, you have
A 'Timer' in MATLAB that you have asked to run a certain piece of code once per second over 10 seconds. (Both times are measured in wall clock time.)
Some MATLAB code that loads a Simulink model (if it isn't already loaded); starts the model (if it isn't already started); and gets the value on the output of a Clock block that is in the model (it does this only once each time the code is executed). As with every Simulink model, it will execute as fast as it can until the simulation end time is reached (or something else stops it).
So, in your case, each time the Timer executes, the simulation is started; the value at the output of the Clock block is obtained and printed (since this happens very soon after the model starts, the simulation time it reports is 0); and then (because your simulation is so simple that it takes almost no time to finish) the simulation terminates.
The above happens 10 times, each time printing the value of the clock at the start of the simulation, i.e. 0.
To see other values you need to make your simulation run longer - taking at least 1 second of wall clock time. For instance, if you change the solver to fixed step and put in a very small step size, something like 0.000001, then the simulation will probably take several seconds (of wall clock time) to execute.
Now you should see the Timer print different times as sometimes the model will still be executing when the code is called (1 second of wall clock time later).
But fundamentally you need to understand that the Timer is not controlling, and is not dependent upon, the simulation time, and vice-versa.
(I'm not sure about the issue with using Ramp, but suspect it's got to do with Clock being a fundamental block while Ramp is a masked subsystem.)
In Simulink, if I run any simulation, it follows an internal clock. I want to run these simulations in real time.
Example: if I use a PWM pulse generator and give it a sample time of 1 second, I expect it to generate a sample at the end of every second of real time, but the Simulink clock moves very, very fast (every one second of real time corresponds to about 1e6 seconds of Simulink time). Is there any way to synchronize the Simulink clock with the real-time clock?
I actually need to give an input to hardware every 2 seconds in a loop, which is why this kind of synchronization is needed.
Firstly note that Simulink is not a real-time environment, so anything you do related to this is not guaranteed to be anything but approximate in the timing achieved.
If your model runs faster than real-time then it can be paused at each time step until clock-time and simulation time are (approximately) equal. This is achieved by writing an S-Function.
There are several examples of doing this. For instance here or here.
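As a rough sketch of that idea (not taken from the linked examples; the file name and other details are hypothetical), a Level-2 C MEX S-Function can stall each step until wall-clock time catches up with simulation time. Note that clock() only approximates wall-clock time on some platforms, and a busy-wait like this burns CPU:
#define S_FUNCTION_NAME  sfun_realtime_pacer   /* hypothetical name */
#define S_FUNCTION_LEVEL 2
#include "simstruc.h"
#include <time.h>

static double startWallClock;   /* wall-clock time captured at simulation start */

static double wallClockSeconds(void)
{
    return (double)clock() / (double)CLOCKS_PER_SEC;
}

static void mdlInitializeSizes(SimStruct *S)
{
    ssSetNumSFcnParams(S, 0);
    ssSetNumContStates(S, 0);
    ssSetNumDiscStates(S, 0);
    if (!ssSetNumInputPorts(S, 0)) return;
    if (!ssSetNumOutputPorts(S, 0)) return;
    ssSetNumSampleTimes(S, 1);
}

static void mdlInitializeSampleTimes(SimStruct *S)
{
    ssSetSampleTime(S, 0, 0.01);   /* arbitrary pacing rate: check every 10 ms of sim time */
    ssSetOffsetTime(S, 0, 0.0);
}

#define MDL_START
static void mdlStart(SimStruct *S)
{
    startWallClock = wallClockSeconds();
}

static void mdlOutputs(SimStruct *S, int_T tid)
{
    /* Busy-wait until at least ssGetT(S) seconds of wall-clock time have elapsed. */
    while (wallClockSeconds() - startWallClock < ssGetT(S)) {
        /* spin */
    }
}

static void mdlTerminate(SimStruct *S)
{
}

#ifdef MATLAB_MEX_FILE
#include "simulink.c"
#else
#include "cg_sfun.h"
#endif
Compile it with mex and drop the resulting S-Function block into the model; as noted above, the timing will only ever be approximate.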
Short question: How to get seconds since reset in STM32L051T6 microcontroller?
My effort and detailed issue:
I am using an STM32L051T6 series microcontroller. I need to count seconds since power-on. I am also using low-power mode, so I wrote code that uses the wakeup timer interrupt of the microcontroller's internal RTC, with a 1-second wakeup interval and the external LSE clock of 32768 Hz. I checked the accumulated seconds since power-on (SSPO) after 3 days and found that it was behind by 115 seconds compared to the actual time elapsed. My guess is that this drift comes from interrupt latency in executing the wakeup timer interrupt. How can I remove this 115-second drift? Or is there a better method than the wakeup interrupt to count seconds since power-on?
UPDATE:
I tried to use SysTick, with the HAL_GetTick() function, as the seconds since power-on. But even the SysTick count drifts over time.
If you want to measure time with accuracy over a longer period, an RTC is the way to go. As you mentioned that you have an RTC, you can use the method below.
At startup, load the RTC with zero.
Then you can read the seconds elapsed when required without the errors above.
Edit: As per the comment, the RTC can be changed by the user. In that case,
If you can modify the RTC write function called by the user, then whenever the user calls it, update a global variable, VarA, with the time the user set. The elapsed time is then (time read from RTC) - VarA.
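A sketch of that approach with the STM32 HAL (the HAL calls are real; hrtc is assumed to be initialised elsewhere, uptime_init() and the offset variable are illustrative, and the day arithmetic only holds for roughly a month of uptime unless month rollover is also handled):
#include "stm32l0xx_hal.h"

extern RTC_HandleTypeDef hrtc;        /* RTC handle set up elsewhere (e.g. by CubeMX) */

static uint32_t rtc_user_offset = 0;  /* "VarA": value recorded when the user sets the RTC */

/* Flatten the RTC calendar into seconds since the zero date set at boot. */
static uint32_t rtc_seconds(void)
{
    RTC_TimeTypeDef t;
    RTC_DateTypeDef d;

    HAL_RTC_GetTime(&hrtc, &t, RTC_FORMAT_BIN);
    HAL_RTC_GetDate(&hrtc, &d, RTC_FORMAT_BIN);   /* GetDate must follow GetTime */

    return ((uint32_t)d.Date - 1u) * 86400u
         + (uint32_t)t.Hours   * 3600u
         + (uint32_t)t.Minutes * 60u
         + (uint32_t)t.Seconds;
}

/* Call once at power-on: zero the calendar so rtc_seconds() counts uptime. */
void uptime_init(void)
{
    RTC_TimeTypeDef t = {0};
    RTC_DateTypeDef d = {0};

    d.WeekDay = RTC_WEEKDAY_MONDAY;
    d.Month   = RTC_MONTH_JANUARY;
    d.Date    = 1;
    d.Year    = 0;

    HAL_RTC_SetTime(&hrtc, &t, RTC_FORMAT_BIN);
    HAL_RTC_SetDate(&hrtc, &d, RTC_FORMAT_BIN);
    rtc_user_offset = 0;
}

/* Seconds since power-on; rtc_user_offset is updated by the wrapped
 * RTC write function, as described above. */
uint32_t uptime_seconds(void)
{
    return rtc_seconds() - rtc_user_offset;
}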
If the RTC is accurate, you should use the RTC by storing its value at boot time and later comparing to that saved value. But you said that the RTC can be reset by the user, so I can see two ways to cope with it:
if you have enough control over the system, replace the command or HMI the user can use to reset the clock with a wrapper that informs your module and lets you read the RTC before and after it has been reset
if you do not have enough control, or cannot wrap the user's reset (because it uses a direct system call, etc.), use a timer to check the RTC value every second
But you should define a threshold on the delta in the RTC value. If it is small, it is likely to be an adjustment, because unless your system uses an atomic clock, even an RTC can drift over time. In that case I would not care, because you can hardly know whether it drifted since the last reboot or not. If you want a cleverer algorithm, you can make the threshold depend on the time since the last reboot: the longer the system has been up, the higher the probability it has drifted since then.
Conversely, a large delta is likely to be a correction because the RTC was blatantly wrong, the backup battery has died, or something similar. In that case you should compute a new start RTC time that gives the same elapsed duration with the new RTC value.
As a rule of thumb, I would use a threshold of about 1 or 2 seconds per day of uptime without an RTC adjustment (ref) - meaning I would also store the time of the last RTC adjustment, initially the boot time.
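A small sketch of that threshold rule (all names and values are illustrative; times are in seconds as read from the RTC):
#include <stdint.h>
#include <stdlib.h>

#define DRIFT_SECONDS_PER_DAY 2

/* boot_rtc: RTC value saved as the elapsed-time baseline
 * old_rtc / new_rtc: RTC value just before / after the user's write
 * Returns the baseline to use from now on.                          */
uint32_t handle_rtc_write(uint32_t boot_rtc, uint32_t old_rtc, uint32_t new_rtc)
{
    uint32_t uptime_days = (old_rtc - boot_rtc) / 86400u + 1u;
    int32_t  delta = (int32_t)(new_rtc - old_rtc);

    if ((uint32_t)abs(delta) <= DRIFT_SECONDS_PER_DAY * uptime_days) {
        /* Small delta: likely a drift adjustment, keep the old baseline. */
        return boot_rtc;
    }
    /* Large delta: a real correction; shift the baseline so that
     * (current RTC - baseline) still equals the elapsed time so far. */
    return new_rtc - (old_rtc - boot_rtc);
}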
I have a .pkt sniffer capture. When I read each and every packet within the capture from my C application, I observe a radio header appended. The radio header contains the time in epoch for each and every packet. I would like to find out the time difference between two packets in terms of milliseconds. I am not sure how to diff two epoch values and find out the time difference in milliseconds. Please help me with this.
I have a .pkt sniffer capture.
What is a ".pkt sniffer capture"? Did you use Wireshark to capture the packets? If so, then it's either a pcap or a pcap-ng capture.
The radio header contains the time in epoch for each and every packet.
In most sniffer formats, the time stamp for packets is part of a "record header", not part of a radio-information header. For 802.11 captures, some capture file formats might provide a radio-information header that includes the 802.11 Timing Synchronization Function timer, but that timer doesn't have a specified epoch.
The epoch for the packet time stamp depends on the capture file format. pcap and pcap-ng use the UN*X epoch of January 1, 1970, 00:00:00 UTC, as pcap was originally used on UN*X and pcap-ng is influenced by pcap. Other file formats might, for example, use the Windows FILETIME epoch of January 1, 1601, 00:00:00 "UTC" (using the proleptic Gregorian calendar), or some other time origin.
But if all you want is to
find out the time difference between two packets in terms of milliseconds
then the epoch is irrelevant - if a time is represented as "X seconds since some epoch" (where X isn't necessarily an integer; it could have a fractional part), then the time between "X seconds since the epoch" and "Y seconds since the epoch" is X - Y, and the epoch itself cancels out.
The big issue, then, is how is "seconds since some epoch" represented. An integral count of seconds? An integral count of {nanoseconds, tenths of microseconds, microseconds, milliseconds, etc.}? A combination of seconds and {nanoseconds, microseconds, etc.}? That's why we'd need to know whether this is a file-format time stamp or an 802.11 TSFT and, if it's a file-format time stamp, what file format that is.
If, as you seem to indicate, this is a Wireshark capture (i.e., a capture made with Wireshark, not just a capture that Wireshark happens to be able to read - it can read captures from programs that don't use its native pcap or pcap-ng formats), then the file format is pcap or pcap-ng (in which case the epoch is the Epoch, i.e. January 1, 1970, 00:00:00 UTC), and the time stamp is either 32-bit seconds since the Epoch plus 32-bit microseconds within that second, for pcap, or a 64-bit count of units since the Epoch, for pcap-ng (the units are specified by the Interface Description Block for the interface on which the packet in question was captured; the default is microseconds).
To calculate the difference between the time stamps of two pcap packets, in microseconds, take the difference between the seconds values, multiply it by 1,000,000, and add to it the difference between the microseconds values (both differences are signed). To convert that to milliseconds, divide it by 1,000.
To calculate the difference between the time stamps of two pcap-ng packets, take the difference between the time stamp values; that's a count of fractions of a second, where the fraction is defined by the specified value in the Interface Description Block. To convert that to milliseconds, adjust as appropriate based on what the fraction is (for example, if it's microseconds, divide it by 1,000).
To calculate the difference between the TSFT values of two 802.11 packets, just subtract one value from the other; the TSFT value, at least for radiotap, is in microseconds, so, to convert it to milliseconds, divide it by 1,000.
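For the pcap case, a minimal C sketch using libpcap's pcap_pkthdr, whose ts member is a struct timeval holding seconds and microseconds:
#include <pcap.h>
#include <stdint.h>

/* Difference (b - a) between two packet time stamps, in milliseconds (signed). */
int64_t packet_delta_ms(const struct pcap_pkthdr *a, const struct pcap_pkthdr *b)
{
    int64_t delta_us = (int64_t)(b->ts.tv_sec - a->ts.tv_sec) * 1000000
                     + (int64_t)(b->ts.tv_usec - a->ts.tv_usec);
    return delta_us / 1000;
}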
"Epoch" isn't a unit or format; it's a point in time. Specifically, it's midnight UTC of January 1st, 1970.
Unix timestamps are just the number of seconds that have passed since that time. Subtract the smaller one from the larger to find the difference in seconds, and multiply by 1000 to get the number of milliseconds.
I wish to write a C program which obtains the system time and then uses this time to print out its NTP equivalent.
Am I right in saying that the following is correct for the seconds part of the NTP time?
long int ntp.seconds = time(NULL) + 2208988800;
How might I calculate the fraction part?
The fractional part to add obviously is 0ps ... ;-)
So the question about the fraction reduces to how accurate the system clock is.
gettimeofday() gets you microseconds; clock_gettime() can get you nanoseconds.
Anyhow, I doubt you'll reach the theoretically possible resolution that the 32-bit wide fraction field allows (at least on a standard PC).
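For instance, a sketch that builds the 64-bit NTP timestamp (32-bit seconds since January 1, 1900, plus a 32-bit binary fraction of a second) from clock_gettime():
#include <stdint.h>
#include <time.h>

#define NTP_UNIX_OFFSET 2208988800UL  /* seconds between the 1900 and 1970 epochs */

void unix_to_ntp(uint32_t *ntp_sec, uint32_t *ntp_frac)
{
    struct timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);

    *ntp_sec  = (uint32_t)(ts.tv_sec + NTP_UNIX_OFFSET);
    /* Scale nanoseconds (0..1e9) to a 32-bit binary fraction: frac = ns * 2^32 / 1e9. */
    *ntp_frac = (uint32_t)(((uint64_t)ts.tv_nsec << 32) / 1000000000ULL);
}
The fraction is just the sub-second part scaled by 2^32, so its effective resolution is whatever the system clock actually provides.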