How can I prevent the database being written to again when the browser does a reload/back?

I'm putting together a small web app that writes to a database (Perl CGI & MySQL). The CGI script takes some info from a form and writes it to a database. I notice, however, that if I hit 'Reload' or 'Back' on the web browser, it'll write the data to the database again. I don't want this.
What is the best way to protect against the data being re-written in this case?

Do not use GET requests to make modifications! Be RESTful; use POST (or PUT) instead, and the browser will warn the user before resubmitting the request. Redirecting (using an HTTP redirect) to a receipt page fetched with a normal GET request after the POST/PUT makes it possible to refresh the page without getting warned about resubmitting.
EDIT:
I assume the user is logged in somehow, and that you therefore already have some way of tracking the user, e.g. a session or similar.
You could generate a timestamp (or a random hash, etc.) when displaying the form, storing it both as a hidden field (right beside the anti-cross-site-request-forgery token I'm sure you already have there) and in a session variable (which is stored safely on your server). When you receive the POST/PUT request for this form, check that the timestamp matches the one in the session. If it does, set the session value to something new and hard to guess (the timestamp concatenated with some secret string, for instance), and only then save the form data. If someone repeats the request, the value in the session variable no longer matches and you deny the request.
The problem with doing this is that the form becomes invalid if the user clicks back to change something, which might be a bit too harsh unless it's money you're updating. So if your problem is "stupid" users who refresh and click the back button, accidentally reposting something, just using POST will remind them not to do that, and redirecting will make it less likely. If you have a problem with malicious users, you should use a timestamp too, even though it will occasionally confuse users; and if users are deliberately posting the same message over and over, you probably need to find a way to ban them. Using POST, having a timestamp, and even doing a full comparison against the whole database to check for duplicate posts won't help at all if malicious users just write a script to load the form and submit random garbage automatically. (But cross-site request forgery protection makes that a lot harder.)
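To make the token dance concrete, here is a minimal sketch in TypeScript/Express rather than Perl CGI (the framework, route names, field names and the redirect target are purely illustrative; the flow is the same whatever the stack):

```typescript
import { randomBytes } from "crypto";
import express from "express";
import session from "express-session";

declare module "express-session" {
  interface SessionData {
    formToken?: string | null;
  }
}

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: "change-me", resave: false, saveUninitialized: true }));

app.get("/form", (req, res) => {
  // One fresh token per rendered form: one copy in the session, one in a hidden field.
  const token = randomBytes(16).toString("hex");
  req.session.formToken = token;
  res.send(`<form method="POST" action="/submit">
    <input type="hidden" name="formToken" value="${token}">
    <input name="comment"><button>Send</button>
  </form>`);
});

app.post("/submit", (req, res) => {
  if (!req.body.formToken || req.body.formToken !== req.session.formToken) {
    // Token missing or already consumed (reload / back button): refuse to write again.
    return res.status(409).send("This form was already submitted.");
  }
  req.session.formToken = null;  // consume the token so a repeated POST is rejected
  // ...write the form data to the database here...
  res.redirect(303, "/receipt"); // then redirect so a refresh only re-GETs the receipt page
});

app.get("/receipt", (_req, res) => res.send("Thanks, got it."));

app.listen(3000);
```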

Using a POST request will cause the browser to try to prevent the user from submitting the same request again, but I'd recommend session-based transaction tracking of some kind, so that if the user ignores the browser's warnings and resubmits the query, your application will still prevent duplicate changes to the database. You could include a hidden input in the submission form whose value is a cryptographic hash, and record that hash once the request has been submitted and processed without error.

I find it handy to track the number of form submissions the user has performed in their session. Then, when rendering the form, I create a hidden field that contains that number. If the user resubmits the form by pressing the back button, it'll submit the old number, and the server can tell that the user has already submitted the form by comparing what's in the session with what the form says.
Just my 2 cents.

If you aren't already using some sort of session management (which would let you note and track form submissions), a simple solution is to include a unique identifier in the form (as a hidden element) that is either part of the main DB transaction itself or tracked in a separate DB table. Then, when a form is submitted, you check the unique ID to see whether it has already been processed. And each time the form itself is rendered, you just have to make sure you use a fresh unique ID.

First of all, you can't trust the browser, so any talk about using POST rather than GET is mostly nerd flim-flam. Yes, the client might get a warning along the lines of "Did you mean to resubmit this data again?", but they're quite possibly going to say "Yes, now leave me alone, stupid computer".
And rightly so: if you don't want duplicate submissions, then it's your problem to solve, not the user's.
You presumably have some idea what it means to be a duplicate submission. Maybe it's the same IP within a few seconds, maybe it's the same title of a blog post or a URL that has been submitted recently. Maybe it's a combination of values - e.g. IP address, email address and subject heading of a contact form submission. Either way, if you've manually spotted some duplicates in your data, you should be able to find a way of programmatically identifying a duplicate at the time of submission, and either flagging it for manual approval (if you're not certain), or just telling the submitter "Have you double-clicked?" (If the information isn't amazingly confidential, you could present the existing record you have for them and say "Is this what you meant to send us? If so, you've already done it - hooray")
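For example, a duplicate check at submission time might look roughly like this (the `db.query` helper and the contact_messages schema are placeholders for whatever driver and tables you actually have, not something from the question):

```typescript
// Placeholder DB helper type; swap in whatever driver you actually use.
type Db = { query(sql: string, params: unknown[]): Promise<any[]> };

// Treat "same IP + email + subject within the last 5 minutes" as a duplicate.
async function isDuplicateSubmission(
  db: Db,
  ip: string,
  email: string,
  subject: string
): Promise<boolean> {
  const rows = await db.query(
    `SELECT id FROM contact_messages
      WHERE ip = ? AND email = ? AND subject = ?
        AND created_at > NOW() - INTERVAL 5 MINUTE
      LIMIT 1`,
    [ip, email, subject]
  );
  return rows.length > 0;
}
```

If it comes back true, show the "Have you double-clicked?" page (or the existing record) instead of inserting a new row.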

I'd not rely on POST warnings from the browser. Users just click OK to make messages go away.
Any time you have a request that needs to happen only once, e.g. 'make a payment', send a unique token down that gets submitted back with the request. Throw the token out once it comes back, so you can tell when something is not a valid submission (anything arriving with a token that is no longer 'active' is a repeat). Expire active tokens after some amount of time, e.g. when a user session ends.
(Alternatively, track the tokens that have come back; if you have received a token before, the submission is invalid.)

Do a POST every time you alter data, but never return an HTML response from a POST... instead return a redirect to a GET that retrieves the updated data as a confirmation page. That way, there is no worry about them refreshing the page. If they refresh, all that happens is another retrieve, never a data-altering action.
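A minimal sketch of that Post/Redirect/Get shape (TypeScript/Express here; the /orders routes and the in-memory store are made up purely for illustration):

```typescript
import express from "express";

const app = express();
app.use(express.urlencoded({ extended: false }));

const orders = new Map<string, unknown>(); // stand-in for the real database

app.post("/orders", (req, res) => {
  const id = String(orders.size + 1);
  orders.set(id, req.body);                // the only data-altering step
  res.redirect(303, `/orders/${id}`);      // 303 See Other: the browser follows with a GET
});

app.get("/orders/:id", (req, res) => {
  // Refreshing this confirmation page only re-reads the order; it never re-creates it.
  res.send(`<p>Saved: ${JSON.stringify(orders.get(req.params.id))}</p>`);
});

app.listen(3000);
```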

Related

What is the idiomatic way of handling "ephemeral" state in a database?

I know that "best practices" type of questions are frowned upon in the StackOverflow community, but I am not sure how else to word this.
My "big picture" question is this:
What is a good practice when it comes to handling "session" state in a stateless server (like one that provides a REST api)?
Quick details
Using nodeJS on backend, MongoDB for database.
Example 1: Login state
In version 1 of the admin panel, I had a simple login that asks for an email and password. If the credentials are correct, the user is returned a token; otherwise, an error.
In version 2, I added two-factor authentication for users who activate it.
Deciding to keep things simple, I have now two endpoints. The flow is this:
/admin/verifyPassword:
    Receive email and password;
    if (Credentials are correct) {
        if (Admin requires 2fa) {
            return {nextStep: 2fa};
        } else {
            return tokenCode;
        }
    } else {
        return error;
    }

/admin/verifyTotpToken:
    Receive email and TOTP token;
    Get admin with corresponding email;
    if (Admin has verified password) {
        return tokenCode;
    } else {
        return error;
    }
At the verifyTotpToken step, it needs to know whether the admin has already verified the password. To do that, I decided to attach a 'temporary' field to the Admin document called hasVerifiedPassword, which gets set to true in the verifyPassword step.
Not only that, but I also set a passwordVerificationExpirationDate temporary field in the verifyPassword endpoint so that they have a short window within which they must complete the whole login process.
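For concreteness, the verifyPassword step ends up doing something like this (a rough sketch with the Node MongoDB driver; the database name, lookup key and 5-minute window are assumptions, only the two field names come from the description above):

```typescript
import { MongoClient } from "mongodb";

// Mark the password step as done and open a short window for the TOTP step.
async function markPasswordVerified(client: MongoClient, email: string): Promise<void> {
  await client.db("app").collection("admins").updateOne(
    { email },
    {
      $set: {
        hasVerifiedPassword: true,
        passwordVerificationExpirationDate: new Date(Date.now() + 5 * 60 * 1000),
      },
    }
  );
}
```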
The problem with my approach is that:
It bloats the admin document with ephemeral, temporary state that has nothing to do with an admin itself. In my mind, resource and session are two separate things.
It leaves room for stale data to stay alive and attached to the admin document, which at best is a slight nuisance when looking through the admin collection in a database explorer, and at worst can lead to hard-to-detect bugs because the garbage data is not properly cleaned up.
Example 2: 2FA activation confirmation by email
When an admin decides to activate 2fa, for security purposes, I first send them an email to confirm that it is truly them (and not someone who hijacked their session) who wanted to activate 2fa. To do that I need to pass around a hash of some sort and store it in the database.
My current approach is this:
1) I generate a hash on the server side, store it in their admin document as well as an expiration date.
2) I generate a url containing the hash as a query parameter and send it in the email.
3) The admin clicks the link in the email
4) The frontend code picks up the hash from the query parameter and asks the server to verify it
5) The server looks up the admin document and checks for a hash match. If it matches, great: return OK and clean up the data. If not, return an error. If it has expired, clean up the data.
Here also, I had to use some temporary state (the two fields hash and expirationDate). It is also fragile for the same reasons mentioned above.
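For reference, steps 1 and 2 look roughly like this in my current approach (Node MongoDB driver again; the database name, lookup key, expiry window and frontend URL are assumptions for the sketch):

```typescript
import { randomBytes } from "crypto";
import { MongoClient } from "mongodb";

// Generate the confirmation hash, store it (plus an expiry) on the admin document,
// and build the link that goes into the confirmation email.
async function startTwoFactorActivation(client: MongoClient, email: string): Promise<string> {
  const hash = randomBytes(32).toString("hex");
  await client.db("app").collection("admins").updateOne(
    { email },
    {
      $set: {
        hash,
        expirationDate: new Date(Date.now() + 30 * 60 * 1000), // e.g. a 30-minute window
      },
    }
  );
  return `https://admin.example.com/confirm-2fa?hash=${hash}`;
}
```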
My main point
Through these two examples I tried to illustrate the problem I am facing. Although these solutions are working "fine", I am curious about what better programmers think of my approaches and if there is a better, more idiomatic way of doing this.
Please keep in mind that the purpose of my question is not to get a specific solution to my specific problem. I am looking for advice on the more general problem of storing session data in a clever, maintainable way that does not mix resource state and ephemeral state.

REST optimistic-locking and multiple PUTs

As far as I understand, a PUT request is not supposed to return any content.
Consider the client wants to run this pseudo code:
x = resource.get({id: 1});
x.field1 = "some update";
resource.put(x);
x.field2 = "another update";
resource.put(x);
(Imagine I have an input control and a "Save" button; this allows me to change a part of object "x" shown in the input control, then on button click PUT the changes to the server, then continue editing and maybe "save" another change to "x".)
Following different proposals on how to implement optimistic locking in REST APIs, the above code MUST fail, because the version mark (however it is implemented) for "x" as returned by get() becomes stale after the first put().
Then how do you people usually make it work?
Or do you just re-GET objects after every PUT?
You can use "conditional" actions with HTTP, for example the If-Match header described here:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.24
In short: you receive an ETag with the GET response and supply this ETag back to the server in the If-Match header of your PUT. The server will respond with a failure if the resource you are trying to PUT has a different ETag. You can also use simple timestamps with the If-Unmodified-Since header.
Of course you will have to make your server code understand conditional requests.
For multiple steps, the PUT can indeed return the new representation, and it can therefore include the new ETag or timestamp too. Even if the server does not return the new representation for a PUT, you could still use the timestamp from the response with an If-Unmodified-Since conditional PUT.
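A rough client-side sketch of that conditional flow using fetch (the resource URL and payload shape are made up; the ETag / If-Match handling is the point):

```typescript
// Fetch a resource and remember its ETag alongside the data.
async function getResource(url: string): Promise<{ data: any; etag: string | null }> {
  const res = await fetch(url);
  return { data: await res.json(), etag: res.headers.get("ETag") };
}

// Conditionally update it: the server answers 412 Precondition Failed if the ETag is stale.
async function putResource(url: string, data: unknown, etag: string | null): Promise<Response> {
  return fetch(url, {
    method: "PUT",
    headers: {
      "Content-Type": "application/json",
      ...(etag ? { "If-Match": etag } : {}),
    },
    body: JSON.stringify(data),
  });
}

// Usage sketch: re-capture the version mark from each response so the next PUT is not stale.
// const { data, etag } = await getResource("/api/things/1");
// data.field1 = "some update";
// const res = await putResource("/api/things/1", data, etag);
// const nextEtag = res.headers.get("ETag") ?? etag;  // if the server returns one (see below)
```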
Here is probably what I was looking for: https://www.rfc-editor.org/rfc/rfc7231#section-4.3.4
They implicitly say that we CAN return an ETag from PUT, though only in the case where the server applied the changes exactly as they were given, without any corrections.
However this raises yet another question. In a real-world app the PUT caller will run asynchronously in a JS GUI, as in my example in the question. So the Save button might be pressed several times, with or without entering any changes. If we don't use optimistic locking, then the supposed PUT idempotency makes it safe to send another PUT with each button click, as long as the last one wins (though if there were changes in between, that's not actually guaranteed, so the question remains).
But with optimistic locking, when the first PUT succeeds, it returns the updated ETag, OK? And if there is another PUT request already in flight, still carrying the outdated tag, that latter request will get a 412 and the user will see a message like "someone else changed the resource" - but actually it was our own earlier change.
What do you usually do to prevent that? Disable the Save button until its request has fully completed? What if it times out? Or do you think it's acceptable to see a concurrent-change error message if it was a timeout, because stability is already compromised anyway?
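For illustration, the kind of client-side handling I have in mind (just a sketch, not claiming it's the right answer): serialize the saves so each PUT waits for the previous one and reuses the freshest ETag, and keep Save disabled while a request is in flight.

```typescript
let latestEtag: string | null = null;
let pendingSave: Promise<void> = Promise.resolve();

function queueSave(url: string, data: unknown): Promise<void> {
  // Each save waits for the previous one, so a second PUT never races the first
  // with a stale version mark.
  const next = pendingSave.then(async () => {
    const res = await fetch(url, {
      method: "PUT",
      headers: {
        "Content-Type": "application/json",
        ...(latestEtag ? { "If-Match": latestEtag } : {}),
      },
      body: JSON.stringify(data),
    });
    if (res.status === 412) throw new Error("Resource was changed by someone else");
    if (!res.ok) throw new Error(`PUT failed: ${res.status}`);
    latestEtag = res.headers.get("ETag") ?? latestEtag; // pick up the new mark if the server returns it
  });
  pendingSave = next.catch(() => undefined); // keep the queue alive even if this save fails
  return next;
}
```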

Swipe-delete messages only for current user

Thanks for taking the time to look at my question.
OK, so I'm working on this iPhone app. I'm responsible for the server-side code. The client side is asking for a way to delete private messages from the app. I have created an HTTP DELETE endpoint for them that deletes a specific message. But this request deletes the message from the database, which makes the message disappear for both users, not only the one who has chosen to delete it.
I've been thinking but I can't seem to find the best solution for this. What I need is a solution to only delete the message for the current user.
Should I add some columns in the database that tell which users the private message should be shown to? Then when a user deletes the message from the app, it only stops showing on that user's phone. Or is there a better solution for this?
I need help with some brainstorming. I hope it is an OK question.
Thanks!
A physical delete should probably be avoided. The first couple of reasons I can think of:
how can you do proper testing/audit if the information you're looking for is gone?
legal issues: do you need some levels of data retention?
You can implement some form of logical delete, for example with an extra relation such as UserMessage( UserID, MessageID, MessageStatus ), where MessageStatus could be "unread", "read", "deleted", "important", "spam", etc. (you can map the status to an arbitrary integer if you prefer). When a user deletes a message, you simply change its status in the UserMessage relation, and from the UI side you hide messages which are marked as "deleted".
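A minimal sketch of that logical delete, with a placeholder `db.query` helper (swap in your actual driver; the table and column names follow the relation above):

```typescript
// Placeholder DB helper type; any SQL driver with a parameterised query method will do.
type Db = { query(sql: string, params: unknown[]): Promise<any[]> };

// "Delete" the message for one user only by flipping its status.
async function deleteMessageForUser(db: Db, userId: number, messageId: number): Promise<void> {
  await db.query(
    "UPDATE UserMessage SET MessageStatus = 'deleted' WHERE UserID = ? AND MessageID = ?",
    [userId, messageId]
  );
}

// The other participant still sees the message; only this user's view hides it.
async function listMessagesForUser(db: Db, userId: number): Promise<any[]> {
  return db.query(
    `SELECT m.* FROM Message m
       JOIN UserMessage um ON um.MessageID = m.MessageID
      WHERE um.UserID = ? AND um.MessageStatus <> 'deleted'`,
    [userId]
  );
}
```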

Backbone/Laravel: How to post to a different table

Here's my dilemma. I have a series of tables with leads from our various contact forms. We also have tables for spam that comes in on our forms. But occasionally, a lead may get routed incorrectly to a spam table. So I need to move that lead from one table to the other, which means inserting it into one and deleting it from the other.
As I understand it, when Backbone calls the save method, it checks to see if that id exists in the table. If it doesn't, it makes a POST request. If it does, it makes a PUT request. I need to be able to force Backbone to make a POST request, so that Laravel can call the right RESTful action.
See the problem is that if Backbone makes a PUT request to say, /send-message/52 (the 52 being the ID of the lead in the send-message-spam table), it will update/overwrite the existing lead with the ID of 52. I want to make a POST request to /send-message (obviously without an ID).
I can force Backbone to use a different urlRoot, but how do I force it to make a POST when I call save()?
I'm unsure if there is a configuration option to do this, but you could always override the save method in your Backbone model to have it perform the action you desire.
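Something along these lines might do it (a sketch, worth double-checking against your Backbone version; the urlRoot is the one from the question, the method name and option handling are mine):

```typescript
import * as Backbone from "backbone";
import * as _ from "underscore";

const Lead = Backbone.Model.extend({
  urlRoot: "/send-message",

  // Override save so this model always POSTs to the collection URL instead of
  // PUTting to /send-message/:id when an id is present.
  save(this: Backbone.Model, attrs?: object, options: object = {}) {
    const forced = _.extend({}, options, { type: "POST", url: "/send-message" });
    return Backbone.Model.prototype.save.call(this, attrs, forced);
  },
});

// Alternatively, without subclassing: Backbone.sync merges the options you pass
// into the ajax params, so this usually works too (verify in your version):
//   lead.save(null, { type: "POST", url: "/send-message" });
```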

Providing notifications to the current user from an Apex trigger in Salesforce

I'm trying to figure out a user-friendly way to pass messages to users from my Apex code. I have a trigger which fires after insert/update of a lead, which then filters the list of updates and triggers an @future method which pushes the lead data out to an external web service and updates the converted account with some of the returned values.
I'd like to do the following (where X, Y and Z are any number of leads from 1 to 50)
notify the user converting the leads that leads X, Y and Z will be exported (I'll know this during the trigger execution).
notify the user whether the export succeeded or failed (which will be known for each of X, Y and Z when the @future method runs).
What is the recommended way to pass this information back to the user? I'd prefer not to use email (as this would trigger one email per record, which is pretty spammy and unpleasant). Is there another way to inject notification messages into a page? I've tried ApexPages.addMessage() but it doesn't seem to do anything for me (no error, but no notice either).
addMessage() works with both Visualforce pages and standard pages when there's a current page active, so using it in the trigger should work fine if the user is firing the action from a button / VF page. It won't work from your @future method, however, because that runs asynchronously in the background.
Maybe the best solution would be to use a custom message object, which has a list of fields modified, when, and has a lookup to the appropriate user (or uses them as the owner). You could then create a simple VF page and controller which when viewed queries for records in that object related to the current user and provides an option to delete them (you could automatically delete them after pulling them from the DB but you run the risk of the user not actually noticing a message). You can then take this page and use it as part of a dashboard component, so anytime the user is viewing their Home page they could see a list of notifications.
Finally, another option might be making use of Chatter, pumping the messages to the user that way; they will then also show up in digest emails, etc.
