We currently have a payment tracking system that uses MS SQL Server Enterprise. When a client requests a service, they must pay within 24 hours; otherwise we send them an SMS reminder. Our current implementation simply records the date and time of the purchase, and constantly polls the records to find "expired" purchases.
This is generating so much load on the database that we have to implement some form of replication in order to offload these operations to another server.
I was thinking: is there a way to combine CLR triggers with some kind of scheduler that fires only once, that is, 24 hours after the purchase is created?
Please keep in mind that we have tens of thousands of transactions per hour.
I am not sure how you are thinking that SQLCLR will solve this problem. I don't think this needs to be handled in the DB at all.
Since the request time doesn't change, why not load all requests into a memory-based store that you can hit constantly. You would load the 24-hour-from-request time so that you only need to compare those times to Now. If the customer pays prior to the 24-hour period then you remove the entry from the cache. Else, the polling process will eventually find it, process it, and remove it from the cache.
OR, similarly, you can use a scheduler and load a future event to be the SMS message, based on the 24-hour-from-request time, upon each request. Similar to scheduling an action using "AT". Again, if someone pays prior to that time, just remove the scheduled task/event/reminder.
You would store just the 24-hour-after-time and the RequestID. If the time is reached, the service would refer back to the DB using that RequestID to get the current info.
You just need to make sure to de-list items from the cache / scheduler if payment is made prior to the 24-hour-after time.
And if the system crashes / restarts, you just load all entries that are a) unpaid, and b) have not yet reached their 24-hour-after time.
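The in-memory approach above could be sketched like this (the class and method names are my own invention, not any particular library): a min-heap keyed on the precomputed 24-hour deadline lets the poller inspect only the earliest deadlines instead of scanning every record, and paid requests are removed lazily.

```python
import heapq
import time

class ReminderScheduler:
    """In-memory store of pending payment deadlines.

    Entries sit in a min-heap keyed by deadline, so the polling loop only
    ever looks at the earliest deadlines rather than every record.
    """

    def __init__(self):
        self._heap = []          # (deadline, request_id)
        self._cancelled = set()  # request_ids paid before the deadline

    def add(self, request_id, request_time, ttl_seconds=24 * 3600):
        # Store the precomputed "24 hours after request" time.
        heapq.heappush(self._heap, (request_time + ttl_seconds, request_id))

    def cancel(self, request_id):
        # Lazy removal: the entry is skipped when it surfaces in the heap.
        self._cancelled.add(request_id)

    def pop_expired(self, now=None):
        """Return the request_ids whose deadline has passed, unpaid only."""
        now = time.time() if now is None else now
        expired = []
        while self._heap and self._heap[0][0] <= now:
            _, request_id = heapq.heappop(self._heap)
            if request_id in self._cancelled:
                self._cancelled.discard(request_id)
            else:
                expired.append(request_id)
        return expired
```

The SMS sender would then call `pop_expired()` periodically and, for each id returned, look up the current request details in the DB before sending.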
Related
I am using Camel and I have a business problem. We consume order messages from an ActiveMQ queue. The first thing we do is check in our DB to see if the customer exists. If the customer doesn't exist, then a support team needs to populate the customer in a different system. Sometimes this can take 10 hours or even until the following day.
My question is how to handle this. It seems to me at a high level I can dequeue these messages, store them in our DB and re-run them at intervals (a custom coded solution) or I could note the error in our DB and then return them back to the activemq queue with a long redelivery policy and expiration, say redeliver every 2 hours for 48 hours.
This would save a lot of code, but is approach 2 sound, or could it lead to resource issues or to not knowing where messages are?
This is a pretty common scenario. If you want insight into how the jobs are progressing, then it's best to use a database for this.
Your queue consumption should be really simple: consume the message, check if the customer exists; if so process, otherwise write a record in a TODO table.
Set up a separate route to run on a timer - every X minutes. It should pull out the TODO records, and for each record check if the customer exists; if so process, otherwise update the record with the current timestamp (the last time the record was retried).
This allows you to have a clear view of the state of the system, that you can then integrate into a console to see what the state of the outstanding jobs is.
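The two routes described above could be sketched as follows, using SQLite and made-up table and column names to stand in for the real DB and Camel processors:

```python
import sqlite3
import time

# Hypothetical stand-ins for the real systems: a customer lookup and an
# order processor; swap in the actual DB check and Camel processor bean.
def customer_exists(db, customer_id):
    row = db.execute("SELECT 1 FROM customers WHERE id = ?", (customer_id,)).fetchone()
    return row is not None

def process_order(order):
    pass  # real order handling goes here

def on_message(db, order):
    """Queue consumer: process immediately, or park the order in a TODO table."""
    if customer_exists(db, order["customer_id"]):
        process_order(order)
    else:
        db.execute(
            "INSERT INTO todo (customer_id, payload, last_retry) VALUES (?, ?, ?)",
            (order["customer_id"], order["payload"], time.time()),
        )

def retry_todos(db):
    """Timer route: re-check each parked order; process it once the customer exists."""
    for rowid, customer_id, payload in db.execute(
        "SELECT rowid, customer_id, payload FROM todo"
    ).fetchall():
        if customer_exists(db, customer_id):
            process_order({"customer_id": customer_id, "payload": payload})
            db.execute("DELETE FROM todo WHERE rowid = ?", (rowid,))
        else:
            db.execute("UPDATE todo SET last_retry = ? WHERE rowid = ?",
                       (time.time(), rowid))
```

In Camel terms, `on_message` is the queue consumer route and `retry_todos` is the timer-driven route; the `last_retry` column is what gives you visibility into how long each job has been waiting.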
There are a couple of downsides with your Option 2:
you're relying on the ActiveMQ scheduler, which uses a KahaDB variant sitting alongside your regular store, and may not be compatible with your H/A setup (you need a shared file system)
you can't see the messages themselves without scanning through the queue, which is an antipattern (using a queue as a database). You may as well use a database, especially if you anticipate ever needing to selectively remove a particular message.
I'm creating an app on GAE with Java, and looking for advice on how to handle scheduling user notifications (which will be email, text, push, whatever). There are a couple of ways notifications will be generated: when a producer creates content, and on a consumer's schedule. The latter is the tricky part, because a consumer can change their schedule at any time. Here are the options I have considered and my concerns so far:
Keep an entry in the datastore for each consumer, indexed by the time until the next notification. My concern is over the lag for an eventually-consistent index. The longest lag I've seen reported is about 4 hours, which would be unacceptable for this use-case. A user should not delay their schedule by a week, then 4 hours later receive a notification from the old schedule.
The same as above, but with each entry sharing a common parent so that I can use an ancestor query to eliminate its eventual-ness. My concern is that there could be enough consumers to cause a problem with contention. In my wildest dreams I could foresee something like 10,000 schedule changes per minute at peak usage.
Schedule a task for each consumer. When changing the schedule, it could delete the old task and create a new one at the new time. My concern has to do with the interaction of tasks and datastore transactions, since the schedule will be stored in the datastore. The documentation notes that enqueuing a task plays nicely with transactions, but what about deleting one? I would not want a task to be deleted only to have the add fail as part of its transaction.
Edit: I experimented with deleting tasks (for option 3), and unfortunately a delete that is part of a failed transaction still succeeds. That is a disappointing asymmetry. I will probably end up going that route anyway, but adding some extra logic and datastore flags to ensure rogue tasks that didn't get deleted properly simply do nothing when they execute.
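The guard described in the edit could look something like this (the version check and all names here are hypothetical, just to illustrate the idea): each task carries the schedule version it was created for, and a stale task no-ops when it fires.

```python
# Sketch of the "rogue task" guard: each scheduled task carries the
# schedule version it was created for; the datastore (a plain dict here)
# records the current version, and leftover stale tasks do nothing.

def fire_task(store, user_id, task_version, send_notification):
    current = store.get(user_id)
    if current != task_version:
        return False  # stale task from an old schedule: no-op
    send_notification(user_id)
    return True
```

Bumping the stored version on every schedule change is what invalidates any task whose delete silently succeeded (or failed) outside the transaction.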
Eventual consistency in the Datastore typically measures in seconds. As Google states:
the time delay is typically small, but may be longer (even minutes or more in exceptional circumstances).
Save a time of next notification for each user. Run a cron job periodically (e.g. once per hour), and send notifications to all users who have to be notified at this time (i.e. now >= next notification).
Create a task for each user when a user's schedule is created with the countdown value. When a task executes, it creates the next task for this user.
The first approach is probably more efficient, especially if you choose a large enough window for your cron job.
As for transactions, I don't see why you need them. You can design your system that in the very rare fail situation a user will receive two notifications instead of one (old schedule and new schedule). This is not such a bad thing that you need to design around it.
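The cron-based first approach boils down to one indexed query per run; a minimal sketch, with an invented `schedules` table:

```python
import sqlite3
import time

def due_notifications(db, now=None):
    """Return user ids whose next_notification time has passed."""
    now = time.time() if now is None else now
    return [row[0] for row in db.execute(
        "SELECT user_id FROM schedules WHERE next_notification <= ?", (now,)
    )]

def reschedule(db, user_id, interval_seconds, now=None):
    """After notifying, push the user's next_notification forward one interval."""
    now = time.time() if now is None else now
    db.execute(
        "UPDATE schedules SET next_notification = ? WHERE user_id = ?",
        (now + interval_seconds, user_id),
    )
```

A schedule change is then just an `UPDATE` of `next_notification`, so the cron job always reads the latest value with no task to delete.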
I have an asp.net-mvc website and people manage a list of projects. Based on some algorithm, I can tell if a project is out of date. When a user logs in, i want it to show the number of stale projects (similar to when i see a number of updates in the inbox).
The algorithm to calculate stale projects is kind of slow, so every time a user logs in I have to:
Run a query for all project where they are the owner
Run the IsStale() algorithm
Display the count where IsStale = true
My guess is that this will be really slow. Also, on every project write, I would have to recalculate the above to see if it changed.
Another idea I had was to create a table and run a job every few minutes to calculate stale projects and store the latest count in this metrics table, then just query that when users log in. The issue there is that I still have to keep that table in sync, and since it only recalculates once a minute, if people update projects the value won't change until up to a minute later.
Any ideas for a fast, scalable way to support this inbox concept to alert users of the number of items to review?
The first step is always proper requirement analysis. Let's assume I'm a Project Manager. I log in to the system and it displays my only project as on time. A developer comes to my office and tells me there is a delay in his activity. I select the developer's activity and change its duration. The system still displays my project as on time, so I happily leave work.
How do you think I would feel if I receive a phone call at 3:00 AM from the client asking me for an explanation of why the project is no longer on time? Obviously, quite surprised, because the system didn't warn me in any way. Why did that happen? Because I had to wait 30 seconds (why not only 1 second?) for the next run of a scheduled job to update the project status.
That just can't be a solution. A warning must be sent immediately to the user, even if it takes 30 seconds to run the IsStale() process. Show the user a loading... image or anything else, but make sure the user has accurate data.
Now, regarding the implementation, nothing can be done to run away from the previous issue: you will have to run that process when something that affects some due date changes. However, what you can do is not unnecessarily run that process. For example, you mentioned that you could run it whenever the user logs in. What if 2 or more users log in and see the same project and don't change anything? It would be unnecessary to run the process twice.
What's more, if you make sure the process runs when the user updates the project, you won't need to run the process at any other time. In conclusion, this schema has the following advantages and disadvantages compared to the "polling" solution:
Advantages
No scheduled job
No unneeded process runs (this is arguable because you could set a dirty flag on the project and only run it if it is true)
No unneeded queries of the dirty value
The user will always be informed of the current and real state of the project (which is by far, the most important item to address in any solution provided)
Disadvantages
If a user updates a project and then updates it again within a matter of seconds, the process would run twice (in the polling schema the process might not even run once in that period, depending on the scheduled frequency)
The user who updates the project will have to wait for the process to finish
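The update-triggered schema above can be sketched like this (`is_stale_fn` stands in for the slow IsStale() algorithm; the class is an illustration, not an existing API):

```python
import threading

class ProjectStatus:
    """Recompute staleness only when a project actually changes.

    The expensive check runs once, at write time; the cached result is
    what the login page reads, so reads are always cheap.
    """

    def __init__(self, is_stale_fn):
        self._is_stale_fn = is_stale_fn
        self._cache = {}            # project_id -> bool
        self._lock = threading.Lock()

    def on_project_updated(self, project_id, project):
        # Run the expensive check once, when the write happens.
        stale = self._is_stale_fn(project)
        with self._lock:
            self._cache[project_id] = stale

    def stale_count(self, project_ids):
        # Login-time read: just count cached flags, no recomputation.
        with self._lock:
            return sum(1 for pid in project_ids if self._cache.get(pid, False))
```

This is the trade-off described above made concrete: the writer pays for the computation so that every reader gets an accurate count instantly.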
As for implementing the notification system similarly to StackOverflow, that's quite a different question. I guess you have a many-to-many relationship between users and projects. The simplest solution would be adding a single attribute to the relationship between those entities (the join table):
Cardinalities: A user has many projects. A project has many users
That way, when you run the process, you should update each user's Has_pending_notifications field with the new result. For example, if a user updates a project and it is no longer on time, then you should set the Has_pending_notifications field to true for all of that project's users so that they're aware of the situation. Similarly, set it to false when the project is on time (I understand you just want to make sure the notifications are displayed when the project is no longer on time).
Taking StackOverflow's example, when a user reads a notification you should set the flag to false. Make sure you don't use timestamps to guess if a user has read a notification: logging in doesn't mean reading notifications.
Finally, if the notification itself is complex enough, you can move it away from the relationship between users and projects and go for something like this:
Cardinalities: A user has many projects. A project has many users. A user has many notifications. A notification has one user. A project has many notifications. A notification has one project.
I hope something I've said makes sense, or gives you some other, better idea :)
You can do as follows:
To each user record, add a datetime field saying the last time the slow computation was done. Call it LastDate.
To each project, add a boolean saying whether it has to be listed. Call it Selected.
When you run the slow procedure, you update the Selected fields.
Now, when the user logs in, if LastDate is close enough to now, you use the results of the last slow computation and just take all projects with Selected true. Otherwise, you run the slow computation again.
The above procedure is optimal because it re-computes the slow procedure ONLY IF ACTUALLY NEEDED, while running a procedure at fixed intervals of time risks wasting work, because maybe the user will never use the result of a computation.
Make a field "stale".
Run a SQL statement that updates stale=1 with all records where stale=0 AND (that algorithm returns true).
Then run a SQL statement that selects all records where stale=1.
The reason this can run fast is that AND conditions short-circuit, much as in PHP: if the first half (stale=0) is false, the second half doesn't need to be evaluated. That makes it a fast pass over the whole list: records that are already stale skip the algorithm entirely, saving you time; for the rest, the algorithm runs to see whether they have become stale, and if so stale is set to 1. (Bear in mind that, unlike PHP, SQL engines don't formally guarantee evaluation order, though optimizers will generally apply the cheap equality filter first.)
The second query then just returns all the stale records where stale=1.
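Since SQL engines don't strictly guarantee short-circuit evaluation, the same filtering can also be made explicit in application code; a sketch using SQLite and an invented `projects` table:

```python
import sqlite3

def refresh_stale_flags(db, is_stale):
    """Re-run the expensive check only on rows not already flagged stale."""
    candidates = db.execute("SELECT id FROM projects WHERE stale = 0").fetchall()
    for (project_id,) in candidates:
        if is_stale(project_id):   # the expensive algorithm runs only here
            db.execute("UPDATE projects SET stale = 1 WHERE id = ?", (project_id,))

def stale_ids(db):
    # The cheap second query: just read the precomputed flags.
    return [r[0] for r in db.execute("SELECT id FROM projects WHERE stale = 1")]
```

Rows already marked stale are never re-checked, which is exactly the saving the two-statement approach is after.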
You can do this:
In the database change the timestamp every time a project is accessed by the user.
When the user logs in, pull all their projects. Check the timestamp and compare it with today's date; if it's older than n days, add it to the stale list. I don't believe that comparing dates will result in any slow logic.
I think the fundamental questions need to be resolved before you think about databases and code. The primary of these is: "Why is IsStale() slow?"
From comments elsewhere it is clear that the premise that this is slow is non-negotiable. Is this computation out of your hands? Are the results resistant to caching? What level of change triggers the re-computation?
Having written scheduling systems in the past, there are two types of changes: those that can happen within the slack and those that cause cascading schedule changes. Likewise, there are two types of rebuilds: total and local. Total rebuilds are obvious; local rebuilds try to minimize "damage" to other scheduled resources.
Here is the crux of the matter: if you have total rebuild on every update, you could be looking at 30 minute lags from the time of the change to the time that the schedule is stable. (I'm basing this on my experience with an ERP system's rebuild time with a very complex workload).
If the reality of your system is that such tasks take 30 minutes, having a design goal of instant gratification for your users is contrary to the ground truth of the matter. However, you may be able to detect schedule inconsistency far faster than the rebuild. In that case you could show the user "schedule has been overrun, recomputing new end times" or something similar... but I suspect that if you have a lot of schedule changes being entered by different users at the same time the system would degrade into one continuous display of that notice. However, you at least gain the advantage that you could batch changes happening over a period of time for the next rebuild.
It is for this reason that most of the scheduling problems I have seen don't actually do real time re-computations. In the context of the ERP situation there is a schedule master who is responsible for the scheduling of the shop floor and any changes get funneled through them. The "master" schedule was regenerated prior to each shift (shifts were 12 hours, so twice a day) and during the shift delays were worked in via "local" modifications that did not shuffle the master schedule until the next 12 hour block.
In a much simpler situation (software design) the schedule was updated once a day in response to the day's progress reporting. Bad news was delivered during the next morning's scrum, along with the updated schedule.
Making a long story short, I'm thinking that perhaps this is an "unask the question" moment, where the assumption needs to be challenged. If the re-computation is large enough that continuous updates are impractical, then aligning expectations with reality is in order. Either the algorithm needs work (optimizing for local changes), the hardware farm needs expansion or the timing of expectations of "truth" needs to be recalibrated.
A more refined answer would frankly require more details than "just assume an expensive process" because the proper points of attack on that process are impossible to know.
I am designing an application and I have two ideas in mind (below). I have a process that collects approx. 30 KB of data; this data will be collected every 5 minutes and needs to be updated on the client (web side; 100 users at any given time). The information collected does not need to be stored for future use.
Options:
I can get data and insert into database every 5 minutes. And then client call will be made to DB and retrieve data and update UI.
Collect data and put it into Topic or Queue. Now multiple clients (consumers) can go to Queue and obtain data.
I am leaning toward option 2 as the better solution because it is faster (no DB calls) and there is no redundant storage.
Can anyone suggest which would be the ideal solution, and why?
I don't really understand the difference. The data has to be temporarily stored somewhere until the next update, right?
But all users can see it, not just the first person to get there, right? So a queue is not really an appropriate data structure from my interpretation of your system.
Whether the data is written to something persistent like a database or something less persistent like part of the web server or application server may be relevant here.
Also, you have tagged this as real-time, but I don't see how the web-clients are getting updates real-time without some kind of push/long-pull or whatever.
Seems to me that you need to use a queue and publisher/subscriber pattern.
This is an article about RabbitMQ and the Publish/Subscribe pattern.
I can get data and insert into database every 5 minutes. And then client call will be made to DB and retrieve data and update UI.
You can program your application to be event oriented. For instance, raise domain events and publish a message to your subscribers.
When you use a queue, each subscriber dequeues the messages addressed to it, in order (FIFO). In addition, there is a delivery guarantee, unlike with a database, where a record can be deleted before every 'subscriber' has gotten the message.
The pitfalls of using the database to accomplish this is:
Creation of indexes makes querying faster, but inserts slower;
Will have to control the delivery guarantee for every subscriber;
You'll need a TTL (Time to Live) strategy for purging the records (considering the delivery guarantee);
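For contrast, a toy in-process publish/subscribe broker makes the key difference from a point-to-point queue visible: every subscriber sees every message, rather than only the first consumer to arrive.

```python
class Broker:
    """Minimal in-process publish/subscribe broker.

    Every subscriber to a topic receives each published message, unlike a
    point-to-point queue, where only one consumer would get it.
    """

    def __init__(self):
        self._subscribers = {}  # topic -> list of handler callables

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers.get(topic, []):
            handler(message)
```

A real broker (RabbitMQ, ActiveMQ topics) adds durability, acknowledgements and TTLs on top of this basic fan-out.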
I'm building a website where I need to time users' tasks, show them the time as it elapses and keep track of how long it took them to complete the task. The timer should be precise by the second, and an entire task should take about 3-4 hrs top.
I should also prevent the user from forging the completion time (there is no money involved, so it's not really high-risk, but there is some risk).
Currently I use a timestamp to keep track of when the user began, and at the same time initialize a JS-based timer. When the user finishes I get a notice, and I calculate the difference between the current time and the beginning timestamp. This approach is no good: there is a few seconds' difference between the user's timer and my calculated time difference (i.e. the time I calculated it took the user to complete the task; note: this was only tested in my dev environment, since I don't have any other environment yet).
Two other approaches I considered are:
1. Relying entirely on a client-side timer (i.e. JS), and when the user completes the task, sending the time it took them, encrypted (this way the user can't forge a start time). This doesn't seem very practical, since I can't figure out a way to generate a secret key on the client side that would really be "secret".
2. Relying entirely on a server-side timer and sending "ticks" every second. This seems like a lot of server-side work compared to the other methods (machine, not human; e.g. accessing the DB for every "tick" to get the start time), and I'm also not sure it will be completely accurate.
EDIT:
Here's what's happening now in algorithm wording:
User starts task - server sends user a task id and records start time at db, client side timer is initialized.
User does task, his timer is running...
User ends task, timer is stopped and user's answer and task id are sent to the server.
Server retrieves start time (using received task id) and calculates how long it took user to complete task.
Problem - the time as calculated by server, and the time as displayed at client side are different.
Any insight will be much appreciated.
If I've understood correctly the problem is that the server and client times are slightly different, which they always will be.
So I'd slightly tweak your original sequence as follows:
User starts task: server sends the user a task id and records the start time in the DB; the client-side timer is initialized.
Client notifies the server of the client start time, which is recorded in the DB alongside the server start time.
User does task, his timer is running...
User ends task; the timer is stopped and the user's elapsed time, answer and task id are sent to the server.
Upon receipt, the server notes the incoming request time, retrieves the start time, and calculates how long it took the user to complete the task for both the server times (start/finish) and the client times.
Server ensures that the client value is within an acceptable range of the server-verified time and uses the client time. If the client time is not within the acceptable range (e.g. 30 seconds), then use the server times as the figure.
There will be slight differences in time due to latency, server load, etc. so by using the client values it will be more accurate and just as secure, because these values are sanity checked.
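The sanity check in the last step is just a tolerance comparison; a minimal sketch (the 30-second tolerance is the example figure from above):

```python
def choose_elapsed(server_start, server_end, client_elapsed, tolerance=30.0):
    """Prefer the client-reported elapsed time, but only if it agrees with
    the server's own measurement to within `tolerance` seconds."""
    server_elapsed = server_end - server_start
    if abs(server_elapsed - client_elapsed) <= tolerance:
        return client_elapsed  # matches what the user saw on screen
    return server_elapsed      # out of range: fall back to the server figure
```
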
To answer the comment:
You can only have one sort of accuracy, either accurate in terms of what the client/user sees, or accurate in terms of what the server knows. Anything coming from the client side could be tainted, so there has to be a compromise somewhere. You can minimise this by measurement and offsets, such that the end difference is within the same range as the start difference, using the server time, but it will never be 100% unchangeable. If it's really that much of an issue then store times with less accuracy.
If you really must have accuracy and reliability then the only way is to use the server time and periodically grab it via ajax for display and use a local timer to fill in the gaps with a sliding adjustment algorithm between actual and reported times.
I think this will work. Seems like you've got a synchronization issue and also a cryptography issue. My suggestion is to work around the synchronization issue in a way invisible to the user, while still preserving security.
Idea: Compute and display the ticks client side, but use cryptographic techniques to prevent the user from sending a forged time. As long as the user's reported time is close to the server's measured time, just use the user's time. Otherwise, claim forgery.
Client asks server for a task.
Server gets the current timestamp and encrypts it with a key known only to the server (note: a symmetric secret key, not a public key; anyone holding a public key could encrypt a forged timestamp). This is sent back to the client along with the task (which can be plain text).
The client works until they are finished. Ticks are recorded locally in JS.
The client finishes and sends the server back its answer, the number of ticks it recorded, and the encrypted timestamp the server first sent it.
The server decrypts the timestamp, and compares it with the current local time to get a number of ticks.
If the server's computed number of ticks is within some tolerance (say, 10 seconds, to be safe), the server accepts the user's reported time. Otherwise, it knows the time was forged.
Because the user's time is accepted (so long as it is within reason), the user never knows that the server time could be out of sync with their reported time. Since the time periods you're tracking are long, losing a few seconds of accuracy doesn't seem like it will be an issue. The method requires only the encryption of a single timestamp, so it should be fast.
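The same scheme works with an HMAC instead of encryption, which equally prevents the client from forging the start time; a sketch with an invented secret and tolerance:

```python
import hashlib
import hmac
import time

SECRET = b"server-only-secret"  # never sent to the client

def issue_token(now=None):
    """Give the client a start timestamp plus a MAC it cannot forge."""
    now = time.time() if now is None else now
    stamp = str(now)
    mac = hmac.new(SECRET, stamp.encode(), hashlib.sha256).hexdigest()
    return stamp, mac

def verify_elapsed(stamp, mac, reported_elapsed, now=None, tolerance=10.0):
    """Check the MAC, then check the client's report against our own clock."""
    expected = hmac.new(SECRET, stamp.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return None  # tampered start timestamp
    now = time.time() if now is None else now
    server_elapsed = now - float(stamp)
    if abs(server_elapsed - reported_elapsed) <= tolerance:
        return reported_elapsed  # accept the time the user saw
    return None  # reported elapsed time was forged
```

The client round-trips `(stamp, mac)` unmodified; only the server ever holds `SECRET`, so neither value can be fabricated client-side.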
The only way to prevent cheating is not to trust the client at all, but simply to calculate the final time on the server as the time taken from before sending the task to the client to after receiving the result.
This also implies that the final time has to include some network transmission delays, as unfair as that might seem: if you try to compensate for them somehow, the client can always pretend to be suffering from more delays than it actually is.
What you can do, however, is try to ensure that the network delays won't come as a surprise to the user. Below is a simple approach which completely prevents cheating while ensuring, given some mild assumptions about clock rates and network delays, that the running time shown on the client side when the results are submitted should approximately match the final time calculated on the server:
Client starts timer and requests task from server.
Server records current time and sends task to client.
User completes task.
Client sends result to server and (optionally) stops timer.
Server accepts result and subtracts timestamp saved in step 2 from current time to get final time.
Server sends final time back to client.
The trick here is that the client clock is started before the task is requested from the server. Assuming that the one-way network transmission delay between steps 1 and 2 and steps 4 and 5 is approximately the same (and that the client and server clocks run at approximately the same rate, even if they're not in sync), the time from step 1 to 4 calculated at the client should match the time calculated on the server from step 2 to 5.
From a psychological viewpoint, it might even be a good idea to keep the client clock running past step 4 until the final time is received from the server. That way, when the running clock is replaced by the final time, the jump is likely to be backwards, making the user happier than if the time had jumped even slightly forwards.
The best way to prevent the client from faking the timestamp is simply to never let them have access to it. Use a timestamp generated by your server when the user starts. You could store this in the server's RAM, but it would probably be better to write this into the database. Then when the client completes the task, it lets the server know, which then writes the end timestamp into the database as well.
It seems like the important information you need here is the difference between the start and end times, not the actual start and end times. And if those times are important, then you should definitely be using a single device's time-tracking mechanism: the server's time. Relying upon the client's time prevents the values from being comparable to each other due to differences in time zones. Additionally, it's too easy for the end user to fudge their time (accidentally or intentionally).
Bottom Line: There is going to be some inaccuracy here. You must compromise when you need to satisfy so many requirements. Hopefully this solution will give you the best results.
Clock synchronization
This is what you are looking for; see the Wikipedia explanation.
And here is the solution for JavaScript.