Why does a UTC time difference occur between an application running locally and on Azure? - sql-server

In my application we store the created datetime (in UTC) in the database. This works correctly while running the application on the local machine, but when the same application runs from Azure, there is roughly a +2 minute difference compared with the locally executed app.
The same issue occurs between SQL Server (on-premises) and Azure SQL.

a "+2 min difference" sounds like it may be due to differences in the system clocks between the two systems.
Your question doesn't specify the source of the " created datetime(in UTC) "
Is that from a database function, or from your application?
The most likely explanation for the behavior you observe is that system clocks on the two different systems are not synchronized using the same time service.
A four-dollar timex watch keeps better time than the hardware clock in a $4000 server. (I'm surprised the drift is only two minutes.) If you want the clocks on the two systems to match, there needs to be a mechanism to keep them synchronized with each other.
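If you want to confirm clock skew is the culprit, one quick check is to read the application clock and the database clock in the same round trip. A minimal sketch in Python (pyodbc and the connection string are assumptions; SYSUTCDATETIME() is SQL Server's UTC clock):

import datetime
import pyodbc  # assumed client; any SQL Server driver works

# Placeholder connection string -- substitute your own server and credentials.
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=myserver;DATABASE=mydb;UID=user;PWD=pass")

app_before = datetime.datetime.utcnow()  # application clock (UTC)
db_now = conn.cursor().execute("SELECT SYSUTCDATETIME()").fetchone()[0]
app_after = datetime.datetime.utcnow()

# Half the round trip approximates latency; the remainder is clock skew.
midpoint = app_before + (app_after - app_before) / 2
print("approximate skew:", db_now - midpoint)

Run the same snippet from the local machine and from the Azure host; if the reported skew differs by about two minutes, the clocks, not the code, are the problem.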
FOLLOWUP
I believe the answer above addressed the question you asked.
You may have some additional questions. The question you may actually be looking to answer is: "How do I configure multiple Windows servers so that their system clocks are synchronized?"
Some suggestions (example commands follow this list):
Windows Time Service (Does Microsoft provide a mechanism?)
NTP = Network Time Protocol (Does Azure support NTP?)
time.windows.com (What is the default time source on Azure?)
once a week - (What is the default frequency ...
etc.
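For example, on a Windows server the time source can be inspected and configured with the w32tm utility (flags from memory; confirm against the documentation for your Windows version):

w32tm /query /status
w32tm /config /manualpeerlist:"time.windows.com" /syncfromflags:manual /update
w32tm /resync

The first command shows the current time source and last successful sync, the second points the machine at an NTP peer, and the third forces an immediate resync.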

Related

Do App Engine Flexible Environment VM instance restarts take advantage of automatic scaling?

I'm developing my first App Engine Flexible Environment application.
The docs explain that virtual machines are restarted weekly:
VM instances are restarted on a weekly basis. During restarts Google's management services will apply any necessary operating system and security updates.
Will restarts result in downtime for apps with automatic scaling enabled? If so, are there any steps I can take to avoid downtime?
For example, I could frequently migrate traffic to new instances so that no instance runs for more than one week.
Later I checked with the Google support team, and here are their recommendations for avoiding the downtime.
My questions were:
The weekly update is not fixed in time. Is there a time range in which I should expect the reboot of the instances (i.e., every Friday during the night)?
Does the weekly update involve all the instances, independently of when they were created (i.e., will an instance created 1 hour or 1 day before the weekly update be restarted)?
How are we supposed to handle such a problem? It returns 502 for all requests in the meantime.
1. At this moment there is no way to know when the weekly restart is going to happen. GCP determines when it is necessary and restarts certain instances (once per week).
2. No, as long as you have more than one instance running you won't see all of them being restarted at the same time.
3. What we recommend to avoid downtime due to weekly restarts is having more than one instance as the minimum. Try to set at least 2 instances as the minimum.
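For GAE Flexible, that minimum can be expressed in app.yaml, roughly like this (a sketch; check the current reference for exact keys and defaults):

env: flex
automatic_scaling:
  min_num_instances: 2
  max_num_instances: 10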
I hope this information is useful to others.
The answer to your question is in the docs:
App Engine attempts to keep manual scaling instances running indefinitely, but there is no uptime guarantee. Hardware or software failures that cause early termination or frequent restarts can occur without warning and can take considerable time to resolve. Your application should be able to handle such failures.
Here are some good strategies for avoiding downtime due to instance restarts:
Use load balancing across multiple instances.
Configure more instances than required to handle normal traffic.
Write fall-back logic that uses cached results when a manual scaling instance is unavailable (see the sketch after this list).
Reduce the amount of time it takes for your instances to start up and shut down.
Duplicate the state information across more than one instance.
For long-running computations, checkpoint the state from time to time so you can resume it if it doesn't complete.
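The fall-back item above might look roughly like the following in Python (a sketch; the cache, URL, and timeout values are all placeholders):

import time
import urllib.request

_cache = {}  # last known good result per key, with a timestamp

def fetch_with_fallback(key, url, max_age_seconds=600):
    # Try the live instance; fall back to a cached copy if it is down.
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            body = resp.read()
        _cache[key] = (time.time(), body)  # refresh the cache on success
        return body
    except OSError:
        # Instance unavailable (restarting, network error): serve stale
        # data if it is recent enough, otherwise propagate the failure.
        ts, body = _cache.get(key, (0, None))
        if body is not None and time.time() - ts < max_age_seconds:
            return body
        raise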

Does changing the system time adversely affect SQL Server?

I can't find anything on this with Google. My SQL Server is on a VM, and for some reason the system clock wanders from the domain time by up to ~30 seconds. This happens randomly, 0 to 3 times per week. I have been hounding my VM admin for months about this and he can't seem to find the cause. He has set the server to check against the domain time every 30 minutes, but this does not stop the wandering; it just fixes it faster.
Luckily the system only generates a very few transactions per hour so a 30 second time jump is not likely to cause any of the records to be out of order based on the DATETIME fields.
The VM stuff is out of my hands and this has been going on for months so my question is, can changing the system time cause corruption to the SQL files or some other problem I should be keeping an eye out for?
Timekeeping in virtual machines is quite different from physical machines. Basically, on physical machines, the system clock works by counting processor cycles, but a virtual machine can't do it that way. More info here. So what you are seeing is normal behaviour for a VM, it's one of the fundamentals of virtualisation, and although it's annoying there is nothing you can do about it. We run plenty of SQL servers on VMs and yes, the clock jumps when it syncs, but it's never caused an issue to my knowledge.
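If you still want something concrete to keep an eye on, you can periodically check whether the jumps ever produced rows whose DATETIME runs backwards relative to insertion order. A sketch in Python (pyodbc, the orders table, and its id/created columns are assumptions; LAG needs SQL Server 2012+):

import pyodbc  # assumed client; any SQL Server driver works

conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};"
                      "SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes")

sql = """
SELECT id, created, LAG(created) OVER (ORDER BY id) AS prev_created
FROM orders
"""
for row in conn.cursor().execute(sql):
    if row.prev_created is not None and row.created < row.prev_created:
        print("out-of-order row:", row.id, row.created, "<", row.prev_created)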

App Engine Tasks with ETA are fired much later than scheduled

I am using Google App Engine push task queues to schedule future tasks that I'd like to run with second-level precision.
Typically I schedule a task 30 seconds in the future; when it fires, it triggers a change of state in my system and then schedules another future task.
Everything works fine on my local development server.
However, now that I have deployed to the GAE servers, I notice that the scheduled tasks run late. I've seen them running even two minutes after they have been scheduled.
From the task queues admin console, it actually says for the ETA:
ETA: "2013/11/02 22:25:14 0:01:38 ago"
Creation Time: "2013/11/02 22:24:44 0:02:08 ago"
Why would this be?
I could not find any documentation about the expected precision of tasks scheduled with an ETA.
I'm programming in Python, but I doubt this makes any difference.
In the python code, the eta parameter is documented as follows:
eta: A datetime.datetime specifying the absolute time at which the task
should be executed. Must not be specified if 'countdown' is specified.
This may be timezone-aware or timezone-naive. If None, defaults to now.
My queue settings:
queue:
- name: mgmt
  rate: 30/s
The system is under no load whatsoever, except for 5 tasks that should run every 30 seconds or so.
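For reference, the scheduling call looks roughly like this (the handler URL and payload are placeholders):

from google.appengine.api import taskqueue

# Schedule the state change ~30 seconds from now on the mgmt queue.
taskqueue.add(
    queue_name='mgmt',
    url='/tasks/change_state',  # placeholder handler
    params={'state': 'next'},   # placeholder payload
    countdown=30)               # seconds from now; 'eta' takes an absolute datetime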
UPDATE:
I have found https://code.google.com/p/googleappengine/issues/detail?id=4901, an accepted feature request for timely task queues, although nothing seems to have been done about it. It acknowledges that tasks with an ETA can run late, even by many minutes.
What other alternative mechanisms could I use to schedule a trigger with second-precision?
GAE makes no guarantees about clock synchronization within and across their data centers; see UTC Time on Google App engine? for a related discussion. So you can't even specify the absolute time accurately, even if they made the (different) guarantee that tasks are executed within some tolerance of the target time.
If you really need this kind of precision, you could consider setting up a persistent GAE "backend" instance that synchronizes itself with a trusted external clock, and provides task queuing and execution services.
(Aside: Unfortunately, that approach introduces a single point of failure, so to fix that you could just take the next steps and build a whole cluster of these backends... But at that point you may as well look elsewhere than GAE, since you're moving away from the GAE "automatic transmission" model, toward AWS's "manual transmission" model.)
I reported the issue to the GAE team and I got the following response:
This appears to be an isolation issue. Short version: a high-traffic user is sharing underlying resources and crowding you out.
Not a very satisfying response, I know. I've corrected this instance, but these things tend to revert over time.
We have a project in the pipeline that will correct the underlying issue. Deployment is expected in January or February of 2014.
See https://code.google.com/p/googleappengine/issues/detail?id=10228
See also thread: https://code.google.com/p/googleappengine/issues/detail?id=4901
After they "corrected this instance" I did some testing for a few hours. The situation improved a little, especially for tasks without an ETA, but for tasks with an ETA I still see at least half of them running at least 10 seconds late. This is far from reliable enough for my requirements.
For now I have decided to use my own scheduling service on a different host until the GAE team "corrects the underlying issue" and task scheduling becomes more predictable.
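That external scheduler can be as simple as a loop on a host with a trusted clock that fires an HTTP request at the target time. A minimal sketch (the trigger URL is a placeholder; assumes the third-party requests library):

import datetime
import time
import requests  # assumed HTTP client

def fire_at(eta, url):
    # Sleep until the absolute UTC time 'eta', then hit the trigger URL.
    while True:
        remaining = (eta - datetime.datetime.utcnow()).total_seconds()
        if remaining <= 0:
            break
        time.sleep(min(remaining, 1.0))  # wake at least once a second
    requests.post(url, timeout=10)

fire_at(datetime.datetime.utcnow() + datetime.timedelta(seconds=30),
        "https://example.com/trigger")  # placeholder endpoint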

simple Solr deployment with two servers for redundancy

I'm deploying the Apache Solr web app in two redundant Tomcat 6 servers,
to provide redundancy and improved availability. At this point, scalability is not an issue.
I have a load balancer that can dynamically route traffic to one server or the other or both.
I know that Solr supports master/slave configuration, but that requires manual recovery if the slave receives updates during the master outage (which it will in my use case).
I'm considering a simpler approach using the ability to reload a core:
- only one of the two servers is receiving traffic at any time (the "active" instance), but both are running,
- both instances share the same index data and
- before re-routing traffic due to an outage, the instance about to become active is told to reload the index core(s)
Limited testing of failovers with both index reads and writes has been successful. What implications/issues am I missing?
Your thoughts and opinions welcomed.
The simple approach to redundancy you're considering seems reasonable, but you will not be able to use it for disaster recovery unless you can share the data/index with a different physical location via your NAS/SAN.
Here are some suggestions:
Make backups for disaster recovery, and test that those backups work: an index could conceivably become corrupted, since there is no internal checksumming in Solr/Lucene. An index could get wiped, or some records could get deleted and merged away without you knowing it, and backups are useful for recovering those records/docs later if you need to perform an investigation.
Before you re-route traffic to the second instance, run some queries to warm the caches and to test and confirm that the current index works before it goes online (a sketch follows these suggestions).
Isolate the updates to one location, process, and thread to ensure transactional integrity in the event of a cutover; consistency can be difficult to manage because Solr does not use a vector clock to synchronize updates the way some databases do. I would personally keep an ordered copy of all updates outside Solr in some other store, in case a small time window needs to be replayed.
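The reload-and-warm step mentioned above might look like this (a sketch using Solr's CoreAdmin RELOAD action; the host, core name, and warm-up queries are placeholders):

import urllib.parse
import urllib.request

SOLR = "http://localhost:8983/solr"  # placeholder host
CORE = "core0"                       # placeholder core name

# Ask the standby instance to reload the core so it picks up the shared index.
urllib.request.urlopen("%s/admin/cores?action=RELOAD&core=%s" % (SOLR, CORE))

# Warm the caches and confirm the index answers before routing traffic to it.
for q in ("*:*", "title:test"):      # placeholder warm-up queries
    resp = urllib.request.urlopen(
        "%s/%s/select?q=%s" % (SOLR, CORE, urllib.parse.quote(q)))
    assert resp.getcode() == 200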
In general, my experience with Solr has been excellent as long as you are not using cutting-edge features and plugins. I have one instance that currently holds 40 million docs and has an uptime of well over a year with no issues. That doesn't mean you won't have issues, but it gives you an idea of how stable it can be.
I hardly know anything about Solr, so I don't know the answers to some of the questions that need to be considered with this sort of setup, but I can provide some things for consideration. You will have to consider what sorts of failures you want to protect against and why and make your decision based on that. There is, after all, no perfect system.
Both instances are using the same files. If the files become corrupt or unavailable for some reason (hardware fault, software bug), the second instance is going to fail the same as the first.
On a similar note, are the files stored and accessed in such a way that they are always valid when the inactive instance reads them? Will the inactive instance try to read the files when the active instance is writing them? What would happen if it does? If the active instance is interrupted while writing the index files (power failure, network outage, disk full), what will happen when the inactive instance tries to load them? The same questions apply in reverse if the 'inactive' instance is going to be writing to the files (which isn't particularly unlikely if it wasn't designed with this use in mind; it might for example update some sort of idle statistic).
Also, reloading the indices sounds like it could be a rather time-consuming operation, and service will not be available while it is happening.
If the active instance needs to complete an orderly shutdown before the inactive instance loads the indices (perhaps due to file validity problems mentioned above), this could also be time-consuming and cause unavailability. If the active instance can't complete an orderly shutdown, you're gonna have a bad time.

Time Dependent, How?

I have a database, which is a part of a Library Information system. It keeps track of the books borrowed by customers, keeping the due dates and automating the notification of accountability of customers, if a customer has returned a book beyond their due date.
Now, I am using MySQL for the DBMS. What I know is that MySQL's time is dependent on the system time. When checking whether a borrowed book has already passed its due date, I compare the current system time with the due-date value associated with the borrowed book. Yeah, the database server will actually be running on a PC running Windows XP.
My problem is that when the system time gets changed, the integrity of the data and the accountability checks are compromised. Is there a way to work around this? Is there some sort of 'independent time' that I could use? Thanks a lot!
NOTE: Yeah, I'm afraid the application does not have a connection to the Internet.
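For illustration, the overdue check described above amounts to something like this (a sketch; the loans table, its columns, and the mysql-connector-python client are all assumptions):

import mysql.connector  # assumed client: pip install mysql-connector-python

conn = mysql.connector.connect(user='library', password='secret',
                               host='localhost', database='library')

# Overdue, unreturned loans, judged against the MySQL server's clock.
cur = conn.cursor()
cur.execute("""
    SELECT customer_id, book_id, due_date
    FROM loans
    WHERE returned_at IS NULL AND due_date < NOW()
""")
for customer_id, book_id, due_date in cur:
    print("overdue:", customer_id, book_id, due_date)

Note that NOW() simply reads the server's system clock, which is exactly the dependency the question is about.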
I think you're trying to program around a problem your application shouldn't worry about. Your app gets its time from the computer; you need to be able to rely on that for accuracy. If the time gets changed, then the time was wrong, so what does that mean for old data? How long was it wrong? It's really not something you can solve programmatically.
A better solution is to make sure the time isn't wrong. Use the Windows Time service to sync against a time server to ensure accuracy.
If your PC is running within a Windows domain, you could also have your computer clock continuously synchronize its time with your domain server using the Windows Time Service.
If your PC has internet access, it can set its time against the U.S. National Institute of Standards and Technology (NIST) time service. Instructions and an overview of how to use it can be found on the NIST Internet Time website.
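When internet access is available, the drift of the local clock can be measured against an NTP server in a few lines using the third-party ntplib package (a sketch; the NIST hostname is from memory):

import ntplib  # third-party: pip install ntplib

resp = ntplib.NTPClient().request('time.nist.gov', version=3)
print("local clock is off by about %.3f seconds" % resp.offset)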
I would configure an authoritative time server in Windows XP. Here is a step-by-step process.
