Implementing lock and release on Firebase data - AngularJS

I want to use Firebase to build an app for ticket purchases. I envision storing the ticket inventory on the server, with these requirements:
1. Allow a user to reserve a ticket while the payment is processing (i.e. lock the ticket)
2. Release the ticket after a certain amount of time if it is not purchased
3. Prevent the same inventory item from being purchased twice
I'm concerned about how this would be possible without server-side code, since the individual clients control the locks and releases. I suppose the client could keep track of how long it's been since the ticket was reserved and then release it. But what if the client disconnects? Would I be able to reliably release locks on tickets using .onDisconnect(), for example when the user loses connectivity?

Yes, you can. Add an .onDisconnect() handler that deletes the lock when the user loses connectivity.
That is exactly what .onDisconnect() is for: the action is triggered on the server side when the client's connection stops.
But you might want to consider what happens if the client loses the connection only temporarily, e.g. if their train goes through a tunnel: the lock would be released even though the user may still be mid-purchase.
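Note that .onDisconnect() is registered from the client SDKs (in the JavaScript SDK, ref.onDisconnect().remove()), which covers hard disconnects but not the timed release in requirement 2. A minimal sketch of the complementary expiry-based lock, here using the firebase-admin Python SDK with a hypothetical tickets/{id}/lock node holding owner and expires_at fields:

    import time

    import firebase_admin
    from firebase_admin import credentials, db

    # Hypothetical credentials file and database URL.
    firebase_admin.initialize_app(
        credentials.Certificate("service-account.json"),
        {"databaseURL": "https://example-app.firebaseio.com"},
    )

    LOCK_TTL = 300  # seconds a reservation may be held before it is considered stale

    def try_reserve(ticket_id, user_id):
        """Atomically claim the lock if it is free or its previous lease has expired."""
        ref = db.reference(f"tickets/{ticket_id}/lock")

        def txn(current):
            now = time.time()
            if current is None or current.get("expires_at", 0) < now:
                # Free or stale: claim it with a fresh expiry.
                return {"owner": user_id, "expires_at": now + LOCK_TTL}
            return current  # someone else holds a live lock; leave it as-is

        result = ref.transaction(txn)
        return result is not None and result.get("owner") == user_id

Because the transaction retries on conflict, two clients racing for the same ticket cannot both end up as owner, and a reservation abandoned by a disconnected client becomes claimable once its lease expires.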

Related

Very long Camel redelivery policy

I am using Camel and I have a business problem. We consume order messages from an ActiveMQ queue. The first thing we do is check our DB to see if the customer exists. If the customer doesn't exist, a support team needs to populate the customer in a different system. Sometimes this can take 10 hours or even until the following day.
My question is how to handle this. At a high level, I can either dequeue these messages, store them in our DB, and re-run them at intervals (a custom-coded solution), or I can note the error in our DB and return the messages to the ActiveMQ queue with a long redelivery policy and expiration, say redeliver every 2 hours for 48 hours.
The second approach would save a lot of code, but is it sound, or could it lead to resource issues or to not knowing where messages are?
This is a pretty common scenario. If you want insight into how the jobs are progressing, then it's best to use a database for this.
Your queue consumption should be really simple: consume the message, check if the customer exists; if so process, otherwise write a record in a TODO table.
Set up a separate route to run on a timer - every X minutes. It should pull out the TODO records, and for each record check if the customer exists; if so process, otherwise update the record with the current timestamp (the last time the record was retried).
This gives you a clear view of the state of the system, which you can then surface in a console showing the state of the outstanding jobs.
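Camel itself would express this as two routes (a JMS consumer plus a timer:// route). Purely to illustrate the shape of the pattern, here is a minimal sketch in Python, where customer_exists, process_order, and the todo table are hypothetical stand-ins for your own customer DB and business logic:

    import sqlite3
    import time

    db = sqlite3.connect("orders.db")
    db.execute("CREATE TABLE IF NOT EXISTS todo"
               " (order_id TEXT PRIMARY KEY, payload TEXT, last_retry REAL)")

    def customer_exists(payload):
        return False  # hypothetical: look the customer up in your DB

    def process_order(payload):
        pass  # hypothetical: the normal order-processing logic

    def on_message(order_id, payload):
        """Queue consumer: process immediately, or park the order in the TODO table."""
        if customer_exists(payload):
            process_order(payload)
        else:
            db.execute("INSERT OR REPLACE INTO todo VALUES (?, ?, ?)",
                       (order_id, payload, time.time()))
            db.commit()

    def retry_pending():
        """Timer route body: runs every X minutes over the parked orders."""
        for order_id, payload, _ in db.execute("SELECT * FROM todo").fetchall():
            if customer_exists(payload):
                process_order(payload)
                db.execute("DELETE FROM todo WHERE order_id = ?", (order_id,))
            else:
                db.execute("UPDATE todo SET last_retry = ? WHERE order_id = ?",
                           (time.time(), order_id))
        db.commit()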
There are a couple of downsides to your Option 2:
1. You're relying on the ActiveMQ scheduler, which uses a KahaDB variant sitting alongside your regular store and may not be compatible with your H/A setup (you need a shared file system).
2. You can't see the messages themselves without scanning through the queue, which is an antipattern (using a queue as a database); you may as well use a database, especially if you anticipate ever needing to selectively remove a particular message.

GAE Datastore / Task queues – Saving only one item per user at a time

I've developed a Python app that registers information from incoming emails and saves this information to the GAE Datastore. Registering the emails works just fine. As part of the registration, emails with the same subject and recipients get a conversation ID. However, sometimes emails enter the system in such quick succession that emails from the same conversation don't get the same ID. This happens because two emails from the same conversation are processed at the same time, and GAE doesn't yet see the other entry when running a query for the conversation.
I've been thinking of a way to prevent this, and I think it would be best if the system processed only one email per user at a time (each sender has his own account). This could be done with a push task queue that first checks if an email is currently being processed for this user, and if so, puts the new task in a pull queue from which it can be retrieved as soon as the previous task has finished.
The big disadvantage of this is that (I think) I can't run the push queue asynchronously, which is obviously a big performance disadvantage. Any ideas on a better way to set up such a process?
Apparently this was a typical race condition. I've made use of the Transactions functionality to prevent multiple processes from writing at the same time. Documentation can be found here: https://cloud.google.com/appengine/docs/python/datastore/transactions
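For reference, a minimal sketch of that fix using the ndb client library; the Conversation model and the key-naming scheme are hypothetical, but because the lookup-and-create runs inside a transaction, two emails processed concurrently can no longer both create a conversation:

    from google.appengine.ext import ndb

    class Conversation(ndb.Model):
        subject = ndb.StringProperty()

    @ndb.transactional
    def get_or_create_conversation(key_name, subject):
        # key_name would be derived from subject + recipients (hypothetical scheme)
        key = ndb.Key(Conversation, key_name)
        conv = key.get()
        if conv is None:
            conv = Conversation(key=key, subject=subject)
            conv.put()
        return conv

    # ndb also ships this exact pattern as a one-liner:
    # Conversation.get_or_insert(key_name, subject=subject)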

Ticket reservation system built entirely on Cassandra

Would it be possible to build a Ticketmaster style ticket reservation system by storing all information in a Cassandra cluster?
The system needs to be able to
1. Display the correct number of tickets available at one time
2. Temporarily reserve a ticket while the customer is making the purchase
3. No two users can ever buy the same ticket.
For consistency, all reads and writes would be made at quorum. I'm not sure how to implement requirements 2 and 3.
Yes, you can.
However, there will be some transactions where you want strict consistency. For example, consistency does not matter while the user is browsing the site and adding tickets to their shopping cart, but when they check out and select a specific seat number on a specific day, consistency matters a great deal (double bookings being a bad thing, especially for high-interest events).
So, you could implement 99% of the functionality in an eventually consistent database and implement the checkout process in a consistent database. This is also nice because you can scale the 99% of your system that likely gets >70% of the load horizontally and across multiple data centers. Just keep in mind that you will have to deal with the scenario of your site being up while your checkout process is down (e.g., an error dialog at checkout asking the user to wait/retry and giving them a promo code for their trouble).
The last detail is that you will need to update your eventually consistent database's "number of available tickets" after someone checks out. The good news is that this can be done lazily - queue up that job and do it whenever your system has some spare cycles. It certainly never has to happen in the critical path of the user's checkout process.
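As a concrete alternative for the strictly consistent step without leaving Cassandra: lightweight transactions (INSERT ... IF NOT EXISTS, which runs a Paxos round) let exactly one buyer's write win. A sketch using the DataStax Python driver, with a hypothetical keyspace and reservations table:

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    session = Cluster(["127.0.0.1"]).connect("ticketing")  # hypothetical keyspace

    def reserve_seat(event_id, seat, user_id):
        """Returns True only for the single caller that wins the seat."""
        stmt = SimpleStatement(
            "INSERT INTO reservations (event_id, seat, user_id) "
            "VALUES (%s, %s, %s) IF NOT EXISTS",
            consistency_level=ConsistencyLevel.QUORUM,          # commit phase
            serial_consistency_level=ConsistencyLevel.SERIAL,   # Paxos phase
        )
        result = session.execute(stmt, (event_id, seat, user_id))
        return result.one().applied  # False if someone already holds the seat

The trade-off is extra round trips per write, which is another reason to keep LWTs confined to the checkout path rather than the browsing path.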

Syncing database and an external payment service

Are there any "design patterns" for processing important financial operations so that there is no way a local database can get out of sync because of errors?
Example:
A financial transaction record is created in a local db, then a request is sent to a remote payment API endpoint to charge a customer. Pseudocode:
    record = TransactionRecord.create(timestamp=DateTime.now, amount=billed_amount, status=Processing)
    response = Request.post(url=remote_url, data=record.post_data)
    if response.ok:
        record.mark_as_ok()
    else:
        record.mark_failed()
Now, even if I handle the errors that the remote payment service can return, a lot of other bad things can still happen: the DB server can go down, the network connection can drop, etc., at arbitrary points in time.
In the above code, the DB server can become inaccessible right after the transaction record is created, so it might not be possible to mark that record as ok, even if the financial transaction itself has been performed successfully by the remote service.
In other words: the customer is charged, but we have no record of it.
This can be worked around in a number of ways: by periodically syncing with the remote service, or by investigating TransactionRecords that are still in Processing but are older than, e.g., 10 minutes or an hour.
But my question is whether there are well-established patterns for handling such situations (where money is involved, so everything should work properly "all the time").
PS. I'm not sure what tags I should use for this question; feel free to re-tag it.
I don't think there is any 'design pattern' to address cases such as the database connection or the network connection going down, as happens in your scenario. Either of those is a major fault event and would most likely require manual intervention.
There is not much coding you can do to address them, other than being defensive: do proper error checking, send proper notifications to support, and automatically disable functionality that does not work (if the application detects that the payment service is down, the 'Submit payment' button should be disabled).
You will be able to cut down significantly on support if you do proper error handling and state management. In your case, the transaction record would have to change its state from Pending -> Submitted -> Processed or Rejected or something like this.
Also, not every service provides functionality for syncing up.
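Sketched in the question's own pseudocode style (TransactionRecord, Request, the update method, and the NetworkError type are all hypothetical), the key detail is persisting Submitted before the remote call, so a crash mid-call leaves a record that a reconciliation job can chase up instead of a silently lost charge:

    record = TransactionRecord.create(timestamp=DateTime.now, amount=billed_amount,
                                      status=Pending)
    record.update(status=Submitted)  # durable marker: a charge may now exist remotely
    try:
        response = Request.post(url=remote_url, data=record.post_data)
    except NetworkError:
        # Outcome unknown: leave the record in Submitted for the reconciliation job.
        raise
    record.update(status=Processed if response.ok else Rejected)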

How to provide global functionality in multi-user database app

I have been building a multi-user database application (in C#/WPF 4.0) that manages tasks for all employees of a company. I now need to add some functionality, such as sending an email reminder to someone when a critical task is due. How should this be done? Obviously I don't want every instance of the program performing this function (otherwise each user would get 10+ emails).
Should I add the capability to the application as a "Mode" and then run a copy on the database server in this mode or would it be better to create a new app altogether to perform "Global" type tasks? Is there a better way?
You could create a Windows service/WCF service that polls the database at regular intervals for any pending tasks and sends mails accordingly.
Some flag would be needed to indicate whether the email has been sent for a particular task.
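To make that concrete, a language-agnostic sketch of the poller in Python (in your stack it would be a C# Windows service); the tasks table, its columns, and send_reminder are hypothetical:

    import sqlite3
    import time

    def send_reminder(address):
        pass  # hypothetical SMTP helper

    def poll_forever(interval=300):
        db = sqlite3.connect("tasks.db")
        while True:
            due = db.execute(
                "SELECT id, assignee_email FROM tasks "
                "WHERE due_date <= date('now') AND email_sent = 0").fetchall()
            for task_id, address in due:
                send_reminder(address)
                # Flip the flag the moment the mail goes out, so service
                # restarts never re-send for the same task.
                db.execute("UPDATE tasks SET email_sent = 1 WHERE id = ?", (task_id,))
                db.commit()
            time.sleep(interval)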
