I'm building a website where I need to time users' tasks, show them the time as it elapses, and keep track of how long it took them to complete the task. The timer should be precise to the second, and an entire task should take about 3-4 hours tops.
I should also prevent the user from forging the completion time (there is no money involved, so it's not really high-risk, but there is some risk).
Currently I use a timestamp to keep track of when the user began and, at the same time, initialize a JS-based timer. When the user finishes I get a notification and calculate the difference between the current time and the beginning timestamp. This approach is no good: there is a few seconds' difference between the user's timer and my calculated time difference (i.e. the time I calculated it took the user to complete the task; note that this was only tested in my dev environment, since I don't have any other environment yet).
Two other approaches I considered are:
1. Relying entirely on a client-side timer (i.e. JS), and when the user completes the task, sending the time it took him encrypted (so the user can't forge a start time). This doesn't seem very practical, since I can't figure out a way to generate a secret key on the client side which will really be "secret".
2. Relying entirely on a server-side timer, and sending "ticks" every second. This seems like a lot of server-side work compared to the other methods (machine work, not human, e.g. accessing the DB for every "tick" to get the start time), and I'm also not sure it will be completely accurate.
EDIT:
Here's what's happening now in algorithm wording:
User starts the task - the server sends the user a task id and records the start time in the DB; the client-side timer is initialized.
User does the task; their timer is running...
User ends the task; the timer is stopped and the user's answer and task id are sent to the server.
Server retrieves the start time (using the received task id) and calculates how long it took the user to complete the task.
Problem: the time as calculated by the server and the time as displayed on the client side are different.
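For reference, here is roughly what that flow looks like in code; a minimal JavaScript sketch, where the element id, the in-memory map and the function names are stand-ins for the real page and DB:

    // Client side: start a display timer when the task begins.
    const startedAt = Date.now();                       // client clock
    const display = document.getElementById('timer');   // assumed element id
    setInterval(() => {
      const elapsed = Math.floor((Date.now() - startedAt) / 1000);
      display.textContent = `${Math.floor(elapsed / 60)}m ${elapsed % 60}s`;
    }, 1000);

    // Server side (Node-style): record the server's own start time and compute the
    // difference on completion - this is the value that drifts from the client's display.
    const startTimes = new Map();                        // stand-in for the DB table
    function startTask(taskId) { startTimes.set(taskId, Date.now()); }
    function finishTask(taskId) {
      return Math.floor((Date.now() - startTimes.get(taskId)) / 1000); // seconds, server view
    }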
Any insight will be much appreciated.
If I've understood correctly, the problem is that the server and client times are slightly different, which they always will be.
So I'd slightly tweak your original sequence as follows:
User starts task - server sends user a task id and records the start time in the DB; the client-side timer is initialized.
Client notifies server of the client start time; this is recorded in the DB alongside the server start time.
User does task, his timer is running...
User ends task; the timer is stopped and the user's elapsed time, answer and task id are sent to the server.
Upon receipt the server notes the incoming request time, retrieves the start time and calculates how long it took the user to complete the task for both the server times (start/finish) and the client times.
Server ensures that the client value is within an acceptable range of the server-verified time and uses the client time. If the client time is not within an acceptable range (e.g. 30 seconds) then use the server times as the figure.
There will be slight differences in time due to latency, server load, etc. so by using the client values it will be more accurate and just as secure, because these values are sanity checked.
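Here is a minimal sketch of that sanity check in JavaScript (Node-style); the in-memory map stands in for the DB and the 30-second tolerance is the example figure from above:

    const TOLERANCE_MS = 30 * 1000;   // acceptable client/server disagreement
    const startTimes = new Map();     // taskId -> server start time (stand-in for the DB)

    function startTask(taskId) {
      startTimes.set(taskId, Date.now());
    }

    function completeTask(taskId, clientElapsedMs) {
      const serverElapsedMs = Date.now() - startTimes.get(taskId);
      const withinRange = Math.abs(clientElapsedMs - serverElapsedMs) <= TOLERANCE_MS;
      // Use the client's figure when it passes the check, otherwise fall back to the server's.
      return withinRange ? clientElapsedMs : serverElapsedMs;
    }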
To answer the comment:
You can only have one sort of accuracy, either accurate in terms of what the client/user sees, or accurate in terms of what the server knows. Anything coming from the client side could be tainted, so there has to be a compromise somewhere. You can minimise this by measurement and offsets, such that the end difference is within the same range as the start difference, using the server time, but it will never be 100% unchangeable. If it's really that much of an issue then store times with less accuracy.
If you really must have accuracy and reliability then the only way is to use the server time and periodically grab it via ajax for display and use a local timer to fill in the gaps with a sliding adjustment algorithm between actual and reported times.
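A rough sketch of that last idea, assuming a hypothetical /servertime endpoint that returns the server's epoch milliseconds:

    // Client side: show server time, using a local timer to fill the gaps between polls.
    let offsetMs = 0;   // serverNow - clientNow, refreshed periodically

    async function syncOffset() {
      const t0 = Date.now();
      const res = await fetch('/servertime');            // hypothetical endpoint
      const serverNow = (await res.json()).epochMs;
      const t1 = Date.now();
      const target = serverNow + (t1 - t0) / 2 - t1;      // roughly compensate for the round trip
      offsetMs += (target - offsetMs) * 0.3;              // slide toward the new offset instead of jumping
    }

    syncOffset();
    setInterval(syncOffset, 30 * 1000);                   // re-sync every 30 seconds
    setInterval(() => {
      const serverishNow = Date.now() + offsetMs;         // approximately what the server clock reads now
      console.log(new Date(serverishNow).toISOString());  // stand-in for updating the display
    }, 1000);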
I think this will work. Seems like you've got a synchronization issue and also a cryptography issue. My suggestion is to work around the synchronization issue in a way invisible to the user, while still preserving security.
Idea: Compute and display the ticks client side, but use cryptographic techniques to prevent the user from sending a forged time. As long as the user's reported time is close to the server's measured time, just use the user's time. Otherwise, claim forgery.
Client asks server for a task.
Server gets the current timestamp and encrypts it with a key known only to the server, so the client can neither read nor forge it. This is sent back to the client along with the task (which can be plain text).
The client works until they are finished. Ticks are recorded locally in JS.
The client finishes and sends the server back its answer, the number of ticks it recorded, and the encrypted timestamp the server first sent it.
The server decrypts the timestamp, and compares it with the current local time to get a number of ticks.
If the server's computed number of ticks is within some tolerance (say, 10 seconds, to be safe), the server accepts the user's reported time. Otherwise, it knows the time was forged.
Because the user's time is accepted (so long as it is within reason), the user never knows that the server time could be out of sync with their reported time. Since the time periods you're tracking are long, losing a few seconds of accuracy doesn't seem like it will be an issue. The method requires only the encryption of a single timestamp, so it should be fast.
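One way to implement the unforgeable start-time token is an HMAC over the timestamp with a server-side secret rather than encryption: the start time stays visible to the client but cannot be altered, and the server stays stateless. A sketch using Node's built-in crypto module (the secret and tolerance are made-up example values):

    const crypto = require('crypto');
    const SECRET = 'server-side-secret';   // example only; keep the real key out of source control
    const TOLERANCE_MS = 10 * 1000;

    // Issued with the task: the plain start time plus a MAC the client cannot recompute.
    function issueStartToken() {
      const startMs = Date.now().toString();
      const mac = crypto.createHmac('sha256', SECRET).update(startMs).digest('hex');
      return `${startMs}.${mac}`;
    }

    // Called on completion with the token and the elapsed time the client counted.
    function verifyReportedTime(token, reportedElapsedMs) {
      const [startMs, mac] = token.split('.');
      const expected = crypto.createHmac('sha256', SECRET).update(startMs).digest('hex');
      if (mac !== expected) return null;                  // token was tampered with
      const serverElapsedMs = Date.now() - Number(startMs);
      const ok = Math.abs(serverElapsedMs - reportedElapsedMs) <= TOLERANCE_MS;
      return ok ? reportedElapsedMs : null;               // null => treat as forged
    }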
The only way to prevent cheating is not to trust the client at all, but simply to calculate the final time on the server as the time taken from before sending the task to the client to after receiving the result.
This also implies that the final time has to include some network transmission delays, as unfair as that might seem: if you try to compensate for them somehow, the client can always pretend to be suffering from more delays than it actually is.
What you can do, however, is try to ensure that the network delays won't come as a surprise to the user. Below is a simple approach which completely prevents cheating while ensuring, given some mild assumptions about clock rates and network delays, that the running time shown on the client side when the results are submitted should approximately match the final time calculated on the server:
1. Client starts timer and requests task from server.
2. Server records current time and sends task to client.
3. User completes task.
4. Client sends result to server and (optionally) stops timer.
5. Server accepts result and subtracts timestamp saved in step 2 from current time to get final time.
6. Server sends final time back to client.
The trick here is that the client clock is started before the task is requested from the server. Assuming that the one-way network transmission delay between steps 1 and 2 and steps 4 and 5 is approximately the same (and that the client and server clocks run at approximately the same rate, even if they're not in sync), the time from step 1 to 4 calculated at the client should match the time calculated on the server from step 2 to 5.
From a psychological viewpoint, it might even be a good idea to keep the client clock running past step 4 until the final time is received from the server. That way, when the running clock is replaced by the final time, the jump is likely to be backwards, making the user happier than if the time had jumped even slightly forwards.
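A client-side sketch of that sequence; the /task and /submit endpoints, collectAnswer and the console output are placeholders:

    // Step 1: start the clock before the task is even requested.
    const startedAt = Date.now();
    const ticker = setInterval(() => {
      const s = Math.floor((Date.now() - startedAt) / 1000);
      console.log(`elapsed: ${s}s`);                     // stand-in for updating the on-page display
    }, 1000);

    async function runTask() {
      const task = await (await fetch('/task')).json();  // step 2 happens on the server
      const answer = await collectAnswer(task);          // step 3: the user works; assumed helper
      const res = await fetch('/submit', {               // step 4: submit the result
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ taskId: task.id, answer }),
      });
      const { finalSeconds } = await res.json();         // step 6: server's authoritative time
      clearInterval(ticker);                             // only now stop the running clock
      console.log(`final: ${finalSeconds}s`);            // most likely a small backwards jump
    }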
The best way to prevent the client from faking the timestamp is simply to never let them have access to it. Use a timestamp generated by your server when the user starts. You could store this in the server's RAM, but it would probably be better to write this into the database. Then when the client completes the task, it lets the server know, which then writes the end timestamp into the database as well.
It seems like the important information you're needing here is the difference between the start and end times, not the actual start and end times. And if those times are important, then you should definitely be using a single device's time-tracking mechanism: the server's time. Relying upon the client's time prevents the values from being comparable to each other due to differences in time zones. Additionally, it's too easy for the end user to fudge their time (accidentally or intentionally).
Bottom Line: There is going to be some inaccuracy here. You must compromise when you need to satisfy so many requirements. Hopefully this solution will give you the best results.
Clock synchronization
This is what you are looking for; see the Wikipedia explanation.
And here is the solution for JavaScript.
Related
I got a bug case from the service desk which was the result of different system times on the application server (JBoss) and the DB (Oracle) server. As a result, the timeout calculations were wrong.
It doesn't happen often, but for the future it would be better if the app server could raise an alarm about the bad time on the DB server before it results in some deeper problems.
Of course, I can simply run
Select CURRENT_TIMESTAMP
and compare it against the local time. But it is probable that sending the query and getting its result will take some noticeable time, so I may flag a good time as a bad one or vice versa.
I can also measure the time from sending the query to the return of the result. But that only works correctly on a good network without lags, and if the time on the DB server is off, it is highly probable that the network around the DB server is not OK either. Queues on the DB server can make the send and receive times noticeably unequal.
What is the best way you know to check the time on the DB server?
Limitations: precision of 5 seconds,
false alarms < 10%.
To be optimized (minimized): missed alarms.
Maybe I am reinventing the wheel and JBoss and/or Oracle have some tool for that? (I could not find one.)
Have a program running on the app server get the current time there, then query the database time (CURRENT_TIMESTAMP), and get the app server's current time again after the query returns.
Confirm that the DB time is between the two times on the App Server (with any tolerance you need). You can include a separate check on how long it took to get the response from the DB but it should be trivial.
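A sketch of that check (Node-style; queryDbTimestamp is a hypothetical helper that runs Select CURRENT_TIMESTAMP and resolves to epoch milliseconds):

    const TOLERANCE_MS = 5 * 1000;   // the 5-second precision asked for above

    async function checkDbClock(queryDbTimestamp) {
      const before = Date.now();              // app server clock, before the query
      const dbNow = await queryDbTimestamp();
      const after = Date.now();               // app server clock, after the query returns

      // The DB clock should fall between the two app-server readings, give or take the tolerance.
      const ok = dbNow >= before - TOLERANCE_MS && dbNow <= after + TOLERANCE_MS;
      const roundTripMs = after - before;     // optionally alarm separately if this is large
      return { ok, roundTripMs };
    }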
If the environment is some form of VM, issues are most likely to arise when the VM is started or resumed from a pause. There might be situations where a clock is running fast or slow so recording the times would allow you to look for trends in either direction and allow you to take preemptive action.
We currently have a payment tracking system which uses MS SQL Server Enterprise. When a client requests a service, he has to pay within 24 hours, otherwise we send him an SMS reminder. Our current implementation simply records the date and time of the purchase and constantly polls the records in order to find "expired" purchases.
This is generating so much load on the database that we have to implement some form of replication in order to offload these operations to another server.
I was thinking: is there a way to combine CLR triggers with some kind of a scheduler that would be triggered only once, that is, 24 hours after the purchase is created?
Please keep in mind that we have tens of thousands of transactions per hour.
I am not sure how you are thinking that SQLCLR will solve this problem. I don't think this needs to be handled in the DB at all.
Since the request time doesn't change, why not load all requests into a memory-based store that you can hit constantly. You would load the 24-hour-from-request time so that you only need to compare those times to Now. If the customer pays prior to the 24-hour period then you remove the entry from the cache. Else, the polling process will eventually find it, process it, and remove it from the cache.
OR, similarly, you can use a scheduler and load a future event to be the SMS message, based on the 24-hour-from-request time, upon each request. Similar to scheduling an action using "AT". Again, if someone pays prior to that time, just remove the scheduled task/event/reminder.
You would store just the 24-hour-after-time and the RequestID. If the time is reached, the service would refer back to the DB using that RequestID to get the current info.
You just need to make sure to de-list items from the cache / scheduler if payment is made prior to the 24-hour-after time.
And if the system crashes / restarts, you just load all entries that are a) unpaid, and b) have not yet reached their 24-hour-after time.
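A minimal sketch of the scheduler variant (in-memory; JavaScript is used purely for illustration, and sendSms and the restart reload are placeholders):

    const pendingReminders = new Map();   // RequestID -> timer handle, so it can be cancelled on payment
    const DAY_MS = 24 * 60 * 60 * 1000;

    function onPurchaseCreated(requestId, createdAtMs, sendSms) {
      const delay = Math.max(0, createdAtMs + DAY_MS - Date.now());
      const handle = setTimeout(() => {
        pendingReminders.delete(requestId);
        sendSms(requestId);                 // look the current details up in the DB by RequestID here
      }, delay);
      pendingReminders.set(requestId, handle);
    }

    function onPaymentReceived(requestId) {
      const handle = pendingReminders.get(requestId);
      if (handle) {
        clearTimeout(handle);               // de-list: no reminder needed
        pendingReminders.delete(requestId);
      }
    }

    // On restart: reload every unpaid purchase younger than 24 hours and call onPurchaseCreated again.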
Whenever a user clicks the GetReport button, a request goes to the server, where the SQL is formed in the back end and a connection is established with the database. When ExecuteReader() is executed, it returns data with varying response times.
There are 12 servers in the production environment, and the setting is such that when there is no response for more than 60 seconds from the back end, the connection is removed and hence a blank screen appears in the UI.
In my code the SQL is formed and the connection is established; when ExecuteReader() is executed, it sometimes returns data only after more than 60 seconds, at which point, per the server settings, the connection has been removed, leading to the blank screen.
If the ExecuteReader() function returns data within 60 seconds, then the functionality works fine. The problem is only when the ExecuteReader() function does not retrieve data within 60 seconds.
The problem is that ExecuteReader() sometimes returns data within 2 seconds for the same SQL and sometimes takes 2 minutes to retrieve the data.
Please suggest why there is such variation in response time for the same query, and how I should proceed in this situation, as we are not able to increase the response timeout in production because of security issues.
The code is in VB.NET.
You said it yourself:
how I should proceed in this situation, as we are not able to increase the response timeout in production because of security issues.
There's nothing you can do
If, however, you do suddenly gain the permissions to modify the query that is being run, or reconfigure the resource provision of the production system, post back here with a screenshot of the execution plan and we can tell you any potential performance bottlenecks.
Dan's comment pretty much covers why a database query might be slow; it's usually for a similar reason to why YouTube is slower to buffer at 7pm: the parents got home from work at 6, the kids screamed at them for an hour wanting to go on YouTube while the parents desperately tried to engage them in something more educational or physically active, and the parents finally gave in, wanting some peace and quiet :) In short, it comes down to resource provision (supply and demand) in the entire chain between you and YouTube.
I have created a vb.net application that uses a SQL Server database at a remote location over the internet.
There are 10 VB.NET clients working at the same time.
The problem is the delay that happens when inserting a new row or retrieving rows from the database: the form appears to freeze for a while when it deals with the database. I don't want to use a background worker to overcome the freeze problem.
I want to eliminate that delay, or at least decrease it as much as possible.
Any tips, advice or information are welcome. Thanks in advance.
Well, 2 problems:
The form appears to be freezing for a while when it deals with the database, I don't want to use a background worker to overcome the freeze problem.
Vanity, arrogance and reality rarely mix. ANY operation that takes more than a SHORT time (0.1-0.5 seconds) SHOULD run async; it's the only way to keep the UI responsive. Regardless of what the issue is, if the operation CAN take longer or is going over the internet, decouple them.
But:
The problem is in the delay time that happens when inserting new records or retrieving records from the database,
So, what IS the problem? Seriously. Is this a latency problem (too many round trips - work on more efficient SQL, batch things so you do not send 20 queries and wait for a result after each) or is the server overloaded? It is not clear from the question whether this really is a latency issue.
At the end:
I want to eliminate that delay time
Pray to whatever god you believe in to change the rules of physics (mostly the speed of light), or to your local physicist to finally get quantum teleportation working at low cost. Packets currently take time to travel; there is no way to change that.
Check whether you use too many round trips. NEVER (!) use SQL Server remotely with raw SQL - put in a web service and make it fit the application, possibly even down to a 1:1 match with your screens, so you can ask for data and send updates in ONE round trip, not a dozen. When we did something similar 12 years ago with our custom ORM in .NET, we used a data access layer that accepted multiple queries in one run and returned multiple result sets for them - so a form with 10 drop-downs could ask for all 10 data sets in ONE round trip. If a request takes 0.1 seconds of internet time, then this saves 0.9 seconds. We had a form with about 100 (!) round trips (creating a tree) and got that down to fewer than 5 - the difference between "takes time" and "wow, that's fast". Plus it WAS async, sorry.
Then realize moving a lot of data is SLOW unless you have instant high bandwidth connections.
This is exactly what async is made for - if you have transfer-time or latency issues that cannot be optimized away and you do not want to use async, you will go on delivering a crappy experience.
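To make the round-trip point concrete, a rough sketch of the "one request, many result sets" idea, with a hypothetical /form-data endpoint standing in for the web service or data access layer described above:

    // Instead of 10 separate queries (10 round trips)...
    //   const customers = await (await fetch('/api/customers')).json();
    //   const products  = await (await fetch('/api/products')).json();
    //   ...
    // ...ask one endpoint for everything a screen needs in a single round trip.
    async function loadFormData(formName) {
      const res = await fetch(`/form-data/${formName}`);   // hypothetical batching endpoint
      // The server runs all the queries for this form and returns the result sets together.
      const { customers, products, orders } = await res.json();
      return { customers, products, orders };
    }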
You can execute the SQL call asynchronously and let Microsoft deal with the background process.
http://msdn.microsoft.com/en-us/library/7szdt0kc.aspx
Please note that this does not decrease the response time from the SQL server; for that you'll have to improve your network speed or the performance of your SQL statements.
There are a few things you could potentially do to speed things up, however it is difficult to say without seeing the code.
If you are using generic inserts - start using stored procedures
If you are closing the connection after every command then... well, don't. Establishing a connection is typically one of the more 'expensive' operations
Increase the pipe between the two.
Add an index
Investigate your SQL Server setup; perhaps it is not configured in a preferred manner.
Assuming I have one database keeping a simple history, with multiple front ends talking to it (one front end per server), I wonder what the common solutions are for dealing with time. As soon as I have multiple servers, I cannot assume a globally consistent clock, and I am interested in possible solutions to maintain some kind of ordering between requests.
For a concrete example, let's say I want to record histories of customers, where a history is defined as a time-ordered set of records. The record table would be as simple as (customer_id, time, data), and a history would be all the rows where customer_id == requested id. Each request sent by the user would contain one record for one customer. Ideally, the time should refer to the "actual" time the request was sent to the front end by the customer (as that's the time as seen from the user's POV). To be exact, I only care about preserving the ordering between records for each customer, not about the absolute time.
I am aware of solutions such as vector clocks, etc., but that seems rather complex, and I would expect this to be a rather common issue?
Solutions which are not acceptable in my case:
Changing the requests arriving at the front end: I unfortunately have to work under the constraint that the requests are passed as is. I have complete control of whatever communication protocol is needed between front ends and database, though.
Requiring the servers' clocks to be synchronized
Requiring that all requests which need to be ordered relative to each other are handled by the same front-end server
[EDIT]: the question may sound a bit like a red herring, so here is my rationale for asking it: while this is not my issue right now, I am interested in the possibility of moving to a platform like Google App Engine, which explicitly says that its servers are not guaranteed to be time-synchronized. The solution to request ordering in that case does not sound obvious to me - but maybe something like a vector clock is actually the only "good" solution?
When you perform any action that records history data to the database you could record two sets of datetime info:
the datetime as set by the DB when the record was inserted
the datetime passed through with the data as a legitimate piece of metadata.
The former would give you a central view of the world if you ever needed it, and the latter would let you reconstruct the datetime from the customer's perspective.
If you were ultra-keen you could also pass through the datetime from the user's browser by filling some sort of parameter/field using JavaScript.
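A sketch of such an insert carrying both timestamps (the db.query helper, table and column names are all placeholders):

    // Record both the database's view of time and the time the front end observed
    // when the customer's request arrived.
    async function recordHistory(db, customerId, data, requestArrivedAtIso) {
      await db.query(
        'INSERT INTO customer_history (customer_id, db_time, client_time, data) ' +
        'VALUES ($1, CURRENT_TIMESTAMP, $2, $3)',
        [customerId, requestArrivedAtIso, data]
      );
    }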
As soon as I have multiple servers, I cannot assume a global consistent clock
Well, you can configure servers to sync their clocks to a time server. You could also configure your database server to sync to a time server, and configure the other servers to sync to your database server as often as you need to. (I'm not saying that's a great idea, just saying it's possible. If you have access to all the servers.)
Anyway . . . so the front ends are the only pieces of software you have that actually know when a request arrives. Is that right?
If that's right, then it's the front end's job to record the time of the customer's request, possibly in UTC, and then forward that timestamp to the database.
If you can't synchronize the servers' clocks, then I think your only hope is to have every front end ask just one specific server - maybe your database server, maybe not - what time it is when a customer request arrives. A front end can do that by asking for the daytime on port 13 (DAYTIME protocol, RFC 867), asking for the time on port 37 (TIME protocol, RFC 868), or asking a time server on port 123 (either the NTP or SNTP protocol, RFC 1305 and RFC 2030).
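A sketch of that variant, with the front end stamping each incoming customer request using a single designated time source (the /now endpoint and writeRecord are hypothetical):

    // Front end: on each incoming customer request, ask the one designated time source
    // what time it is and attach that to the record before forwarding it to the DB.
    async function handleCustomerRequest(customerId, data, writeRecord) {
      const res = await fetch('http://time-source.internal/now');   // hypothetical endpoint
      const { epochMs } = await res.json();
      await writeRecord({ customerId, time: epochMs, data });       // writeRecord is assumed
    }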
But after reading your edit, I think what you want is impossible. You seem to be saying that
what the front ends send doesn't contain enough information to reconstruct the "true" ordering
what the front ends send cannot be changed
If the front ends can't send you any other information, vector clocks and interval tree clocks won't help.