I have a Silverlight application which users will be running in various time zones.
These applications load their data from the server upon start up, then cache it in IsolatedStorage.
When I make changes to the data on the server, I want to be able to change the "last updated time" so that all silverlight clients download the newest data the next time they check this date.
However, I'm a bit confused as to how to handle the time zone issue: if the server is in New York and the update time is set to 2010-01-01 17:00:00, then a client in Seattle that compares it to its local time of 2010-01-01 14:00:00 won't update and will continue to serve old data for three more hours.
My solution is to always post the update time in UTC, not in server local time, and have the Silverlight app compare it against DateTime.UtcNow.
Is this as easy as it sounds, or are there issues with it, e.g. time zones being set incorrectly on client machines so that the Silverlight app does not report the correct UTC time? Can anyone say from experience how likely it is that using DateTime.UtcNow like this for cache refreshing will work in all cases?
If DateTime.UtcNow is not reliable, I will just use an incremented "DataVersion" integer, but there are other scenarios in which getting time zone synchronization right would make it worthwhile to thoroughly understand how to solve this in Silverlight apps.
DateTime.UtcNow is as reliable as the clock on the client system, so the question is entirely independent of Silverlight or .NET: how much do you trust the system clock on the client machines?
You need to weigh the risk that a user of a machine may have incorrectly set the time on their machine because they have not set the time zone correctly. This risk is entirely human in nature.
Using an incrementing version number has only one downside: you need to retrieve the current value before you can set a new one. If that isn't a problem, go with it and eliminate the FUD you might have around time zones.
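For illustration, a minimal C# sketch of the version-number approach (GetServerDataVersion and ReloadData are hypothetical stand-ins for whatever service calls the application actually exposes):

    // Compare an integer version instead of clocks; no client time zone or
    // clock inaccuracy can affect the result.
    private int cachedDataVersion; // persisted in IsolatedStorage alongside the data

    private void RefreshIfStale()
    {
        int serverVersion = GetServerDataVersion(); // hypothetical call to the server
        if (serverVersion > cachedDataVersion)
        {
            ReloadData();                           // hypothetical: fetch and re-cache the data
            cachedDataVersion = serverVersion;      // persist the new version with the cache
        }
    }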
I got a bug case from the service desk which turned out to be caused by different system times on the application server (JBoss) and the DB (Oracle) server. As a result, timeout calculations were wrong.
It doesn't happen often, but for the future it would be better if the app server could raise an alarm about a bad clock on the DB server before it leads to deeper problems.
Of course, I can simply run
Select CURRENT_TIMESTAMP from dual
and compare the result against the local time. But the round trip of sending the query and receiving its result can take a noticeable amount of time, so I might classify a good clock as bad, or vice versa.
I could also measure the time from sending the query to receiving the result, but that only works correctly on a healthy network without lag. And if the clock on the DB server is off, it is quite likely that the network around the DB server is not OK either; queues on the DB server can make the send and receive delays noticeably unequal.
What is the best way you know to check the time on the DB server?
Limitations: precision of 5 seconds;
false alarms < 10%;
to be optimized (minimized): lost alarms.
Maybe I am reinventing the wheel and JBoss and/or Oracle have some tool for this? (I could not find one.)
Have a program on the app server record the current time there, query the database time (CURRENT_TIMESTAMP), and record the app-server time again after the query returns.
Confirm that the DB time falls between the two app-server times (with any tolerance you need). You can include a separate check on how long the response from the DB took, but it should be trivial.
If the environment is some form of VM, issues are most likely to arise when the VM is started or resumed from a pause. A clock may also run fast or slow, so recording the times would let you look for trends in either direction and take preemptive action.
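A rough C# sketch of that sandwich check over ODBC (the monitor would run on the app server; from a JBoss service the same logic applies in Java; the connection string, tolerance, and exact query are assumptions):

    using System;
    using System.Data.Odbc;

    static bool DbClockLooksSane(string connectionString, TimeSpan tolerance)
    {
        DateTime before = DateTime.UtcNow;   // app-server time just before the query
        DateTime dbTime;
        using (var conn = new OdbcConnection(connectionString))
        {
            conn.Open();
            // SYS_EXTRACT_UTC converts Oracle's timestamp to plain UTC for comparison.
            using (var cmd = new OdbcCommand("SELECT SYS_EXTRACT_UTC(SYSTIMESTAMP) FROM DUAL", conn))
            {
                dbTime = Convert.ToDateTime(cmd.ExecuteScalar());
            }
        }
        DateTime after = DateTime.UtcNow;    // app-server time just after the reply

        // Accept the DB clock if it lies inside the request window, widened by the tolerance.
        return dbTime >= before - tolerance && dbTime <= after + tolerance;
    }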
I am trying to design an application that allows support personnel to show system tray alerts during a specified future time window on all user machines.
So we have clients in multiple time-zones checking in to a central server looking for any new alerts to be displayed.
The central DB has a table that stores the alert details with their start/end time and a webservice that responds to the clients by checking this table.
The challenge is that the client as well as the future trigger time on the server can be in any user-specified timezone.
Based on a few threads that I read here, the best practice is to store the start/end time in UTC in the database and convert it to the client's timezone when a request comes in.
That would mean converting all start/end times to each client's time zone every few minutes, and I am worried this would be a major performance issue on the central server. Handling Daylight Saving Time transitions is another point to consider.
Is there a smarter way to handle this? Any best practices for such scenarios would really help.
Assuming you are using SQL Server, you'd want to use DateTimeOffset as the data type. It stores the local time together with its offset from UTC, so the instant it represents is unambiguous regardless of the server's time zone or DST.
Please see the accepted answer in this thread for the canonical answer to this class of issues: DateTime vs DateTimeOffset
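As a small illustrative sketch (class, property, and column names are made up, not from the question): with the window stored as datetimeoffset, the activity check is a single comparison of absolute instants, so no per-client conversion loop is needed on the server:

    using System;

    // An alert window stored as absolute instants: comparing against
    // DateTimeOffset.UtcNow works for every client time zone, DST included.
    public class Alert
    {
        public string Message { get; set; }
        public DateTimeOffset Start { get; set; } // maps to a datetimeoffset column
        public DateTimeOffset End { get; set; }

        public bool IsActive()
        {
            DateTimeOffset now = DateTimeOffset.UtcNow; // zone-independent instant
            return now >= Start && now <= End;
        }
    }

Only presentation needs a conversion, e.g. alert.Start.ToLocalTime() on the client.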
I am implementing a basic sync strategy for a multi-client application that needs to support offline data access. I am using @Chris's suggestion in his answer to this question (not required reading).
One detail I would like to add is the ability to resolve conflicts based on the last change saved, not the last change synced. In other words, if two clients update the same item, the client that saved the change last should win, even if the other client syncs later.
Clearly I need some way to timestamp each change on the client, so I can compare the stamps on the server at the time of sync. However, I can't guarantee much about each client's internal clock.
I would like to know if there is an established way to solve this? The simpler the better!
If you're asking about client clock hijacking: a client should maintain its own internal clock, based on a timestamp it gets from the server plus the time span elapsed on the local clock.
So you just shift the client timestamps relative to the server:
a client record has update time CT1;
after connecting to the server at client time CT2, you find out that the server time is ST2;
so the record's update time is changed to ST1 = ST2 - CT2 + CT1.
The other way is maintaining the same transformation on the server side, which is probably more correct and secure.
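A minimal C# sketch of that transformation (names are illustrative; the same arithmetic can run on either side):

    // ST1 = ST2 - CT2 + CT1: shift a client-recorded update time into server
    // time using one clock sample exchanged when the client connects.
    static DateTime ToServerTime(DateTime clientRecordTime, // CT1: when the record was saved locally
                                 DateTime clientNow,        // CT2: client clock at sync time
                                 DateTime serverNow)        // ST2: server clock reported at sync time
    {
        TimeSpan offset = serverNow - clientNow; // how far the client clock deviates from the server
        return clientRecordTime + offset;        // ST1: the record time expressed in server time
    }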
And sorry, just a note: the odd part is that you call it 'conflict resolution' when it's really 'last update wins'; no actual resolution is performed.
I wouldn't be happy doing this based on timestamps alone; I think you need to look at a proper versioning solution. I don't know what language you are using, but I have a complete library for doing this. Even if the library itself isn't of interest, its documentation may be useful in constructing your own solution; it's pretty fully explained.
The project is on GitHub.
On the system I am developing, we have a PostgreSQL database containing setup data which, when updated, must be transferred to handsets while those handsets are "docked". While the handsets are docked, our "service software" can talk to them, but not while they are undocked (they are not wireless).
At the moment, the service software that the handsets talk to loads the setup data from the database on startup and caches it. Thereafter it queries the latest timestamp of the setup data every 5 seconds and reloads parts of the setup if the queried timestamp is newer than the latest cached timestamp.
However, I find this method haphazard. It may be possible to miss an update, for instance if an update transaction takes longer than a second, or at least if the span between submitting the transaction and its completion crosses a one-second boundary (PostgreSQL resolves the now() function at the beginning of the transaction). The only way I can think of around that is to take a table-level lock before querying the latest timestamp. I'm not a fan of table locks, but it is the only way I can see around the problem.
Another problem with this approach is that I have to query for new data based on the update timestamp being >= the latest cached timestamp, rather than strictly > it. Why? Because a record may have been committed within the same second, just after my query, and I'd miss that record.
Another approach I've thought of is, storing "last synchronised date-time" data in the database for each logical item of data that must be stored on the handsets. I would do this on a per handset basis. I can then periodically query for all data not currently synchronised on a particular handset, and then mark it as synchronised once the handset is up to date (I have worked out a mechanism for this to be failsafe which takes into account the data being updated during synchronisation).
My only problem with this approach is that it means the database stores non-business-centric data, i.e. data that exists only to make the system work. I'm not convinced that knowing which handsets are in sync is "business" data. To me it is more the responsibility of the handset service software / handset software to keep itself up to date, though the approach is tempting because it describes exactly what data is and is not on each handset and allows queries to return only the data needed.
The first approach at least uses only data appropriate to the business, i.e. the timestamp of when the data was last changed.
The ideal way would be to use some kind of notification system, but unfortunately PostgreSQL only has a basic LISTEN / NOTIFY mechanism, and that doesn't seem to work over ODBC (which I foolishly decided to use and do not have time to change just now). If I were using Oracle, I could use Streams.
Thoughts?
Note: The database is purely relational - I am not interested in any "object oriented" approaches to this problem or any framework based solutions.
Thanks.
First of all, if you are using PostgreSQL version 7.2 or later, the now() function returns values with microsecond rather than second precision, although the value is ultimately derived from the operating system and will be accurate only to within a few hundredths of a second.
The method that you describe appears to be safe against permanently missing any updates. Just make sure that you reload data every time unless the timestamps prove that you reloaded long enough after the last update. Alternatively, you could update a timestamp in a separate transaction upon each data update; in that case, seeing such a timestamp proves that all updates had finished before that timestamp's value.
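As a rough C# sketch over ODBC of that polling pattern, using the >= comparison plus a grace window for long transactions (the table name, column names, the 30-second bound, and ApplyToCache are all assumptions for illustration):

    using System;
    using System.Data.Odbc;

    static readonly TimeSpan Grace = TimeSpan.FromSeconds(30); // assumed max transaction length
    static DateTime lastSeen = new DateTime(1970, 1, 1);       // forces a full load on the first poll

    static void PollOnce(OdbcConnection conn)
    {
        // >= plus the grace window catches rows whose now()-based timestamp was
        // fixed at transaction start but which committed after the last poll.
        using (var cmd = new OdbcCommand(
            "SELECT id, payload, updated_at FROM setup_data WHERE updated_at >= ?", conn))
        {
            cmd.Parameters.AddWithValue("@since", lastSeen - Grace);
            using (OdbcDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    ApplyToCache(reader.GetInt32(0), reader.GetString(1)); // hypothetical idempotent upsert
                    DateTime updatedAt = reader.GetDateTime(2);
                    if (updatedAt > lastSeen) lastSeen = updatedAt;
                }
            }
        }
    }

Because rows already cached can reappear inside the overlap window, the cache update must be an idempotent upsert keyed on the row id.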
Another approach I've thought of is, storing "last synchronised date-time" data in the database for each logical item of data that must be stored on the handsets. I would do this on a per handset basis. I can then periodically query for all data not currently synchronised on a particular handset, and then mark it as synchronised once the handset is up to date
I cannot recommend this, for the following reasons:
As synchronization is a state of a handset and not a state of the data, this information should better be stored on the handset.
The database should be scalable to many handsets and it ideally should not have to keep track of them.
If a handset can change its identity, or be wiped or restored (reimaged) to a previous state without changing its identity, the database will get out of sync with the real state of the handset, and no mechanism will ensure proper synchronization.
While NOTIFY is certainly preferable to constant polling, it is a problem orthogonal to where you store the synchronization progress. You still need a polling capability to deal with a freshly connected device; notifications would be just a bandwidth/latency optimization.
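As a rough sketch of what keeping the cursor on the handset could look like (C# for illustration; HandsetSyncState, ISetupService, GetChangesSince, and StoreOnHandset are all made-up names, not part of the system described):

    // The handset persists its own sync cursor; the database stays ignorant
    // of individual handsets. The cursor only advances after a safe store.
    public class HandsetSyncState
    {
        public DateTime LastSyncedThrough { get; set; }  // persisted on the handset itself
    }

    public static void SyncWhileDocked(HandsetSyncState state, ISetupService service)
    {
        SetupChanges changes = service.GetChangesSince(state.LastSyncedThrough);
        StoreOnHandset(changes);                           // write the data first
        state.LastSyncedThrough = changes.ServerTimestamp; // server-issued time, not the handset clock
    }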
I have a database which is part of a Library Information system. It keeps track of the books borrowed by customers, stores the due dates, and automates notifying customers of their accountability when a book is returned after its due date.
Now, I am using MySQL as the DBMS, and MySQL's clock is dependent on the system time. When checking whether a borrowed book is already past its due date, I compare the current system time with the due date stored for the borrowed book. The database server will actually be running on a PC running Windows XP.
My problem is that when the system time gets changed, the integrity of the data and the accountability checks are compromised. Is there a way to work around this? Is there some sort of 'independent time' that I could use? Thanks a lot!
NOTE: Yeah, I'm afraid the application does not have a connection to the Internet.
I think you're trying to program around a problem your application shouldn't worry about. Your app gets time from the computer, you need to be able to rely upon that for accuracy. If the time gets changed, then the time was wrong, so what does that mean for old data? How long was it wrong? It's really not something you can solve programmatically.
A better solution is to make sure the time isn't wrong. Use the Windows Time service to sync against a time server and ensure accuracy.
If your PC is part of a Windows domain, you could also have the computer clock continuously synchronize its time with the domain controller using the Windows Time service.
If your PC has internet access, it can set its time against the US National Institute of Standards and Technology time service. Instructions and an overview of how to use it can be found at the NIST Internet Time website.
I would configure an authoritative time server in Windows XP; Microsoft documents a step-by-step process for this.
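For reference, a hedged sketch of the commands involved, assuming the machine can reach some time source on the local network (the question notes there is no Internet connection, so "timeserver.local" is a placeholder for whatever internal server you trust; exact w32tm syntax can vary by service pack):

    rem Point the Windows Time service at a manual peer, then resynchronize.
    w32tm /config /manualpeerlist:timeserver.local /syncfromflags:manual /update
    net stop w32time
    net start w32time
    w32tm /resync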