REST optimistic-locking and multiple PUTs - angularjs

As far as I understand, a PUT request is not supposed to return any content.
Consider the client wants to run this pseudo code:
x = resource.get({id: 1});
x.field1 = "some update";
resource.put(x);
x.field2 = "another update";
resource.put(x);
(Imagine I have an input control and a button "Save": this lets me change part of object "x" shown in the input control, PUT the change to the server on button click, then continue editing and maybe "save" another change to "x".)
Following different proposals on how to implement optimistic locking in REST APIs, the above code MUST fail, because the version mark (however implemented) for "x" as returned by get() becomes stale after put().
Then how do you people usually make it work?
Or do you just re-GET objects after every PUT?

You can use "conditional" actions with HTTP, for example the If-Match header described here:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.24
In short: the server delivers an ETag with the GET response, and the client supplies that ETag back in the If-Match header of the PUT. The server responds with a failure if the resource you are trying to PUT now has a different ETag. You can also use simple timestamps with the If-Unmodified-Since header.
Of course you will have to make your server code understand conditional requests.
For multiple steps, the PUT can indeed return the new representation, and it can therefore include the new ETag or timestamp too. Even if the server does not return the new representation for a PUT, you could still use the timestamp from the response in an If-Unmodified-Since conditional PUT.
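Assuming a server that supports these conditional requests, the client-side flow might look roughly like this sketch (the URL and resource shape are invented for illustration):

async function saveWithOptimisticLock(url, mutate) {
  // GET the resource and remember the version mark the server sent.
  const getResponse = await fetch(url);
  const etag = getResponse.headers.get('ETag');
  const resource = await getResponse.json();

  mutate(resource); // apply the local edit

  // PUT it back, but only if nobody else has changed it in the meantime.
  const putResponse = await fetch(url, {
    method: 'PUT',
    headers: {'Content-Type': 'application/json', 'If-Match': etag},
    body: JSON.stringify(resource)
  });

  if (putResponse.status === 412) {
    // Precondition Failed: the resource changed under us.
    throw new Error('Conflict: resource was modified by another client');
  }
  return putResponse;
}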

Here is probably what I was looking for: https://www.rfc-editor.org/rfc/rfc7231#section-4.3.4
It implicitly says that we CAN return an ETag from PUT, though only in the case where the server applied the changes exactly as they were given, without any corrections.
However, this raises yet another question. In a real-world app the PUT caller runs asynchronously in a JS GUI, as in my example above. So the Save button might be pressed several times, with or without any changes entered. If we don't use optimistic locking, PUT's supposed idempotency makes it safe to send another PUT with each button click, as long as the last one wins (though if there were changes in between that isn't actually guaranteed, so the question remains).
But with optimistic locking, when the first PUT succeeds it returns an updated ETag, OK? And if another PUT request is still in flight with the outdated tag, that latter request will get a 412 and the user will see a message like "someone else changed the resource" - when actually it was our own earlier change.
What do you usually do to prevent that? Disable the Save button until its request has fully completed? What if it times out? Or do you think it's acceptable to show a concurrent-change error after a timeout, since stability is already compromised at that point anyway?
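One common client-side mitigation - sketched here as an assumption, not a standard recipe - is to serialize saves through a promise chain, so each PUT starts only after the previous one has returned its fresh ETag:

let latestEtag = null; // seed this with the ETag from the initial GET
let saveChain = Promise.resolve();

function queueSave(url, resource) {
  saveChain = saveChain.then(async () => {
    const response = await fetch(url, {
      method: 'PUT',
      headers: {'Content-Type': 'application/json', 'If-Match': latestEtag},
      body: JSON.stringify(resource)
    });
    if (response.status === 412) {
      // Now a 412 really does mean someone else changed the resource.
      throw new Error('Concurrent change detected; re-GET and retry');
    }
    latestEtag = response.headers.get('ETag'); // next queued PUT is not stale
    return response;
  });
  return saveChain;
}

With this, a double-click on Save simply queues a second PUT that carries the ETag returned by the first, so it no longer trips the 412 on our own change.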

Related

Should I use POST or GET if I want to send an Array of filters to fetch all articles related to those filters

I haven't found resources online to solve my problem.
I'm creating an app with React Native that fetches and shows news articles from my database.
At the top of the page there are some buttons with filters inside, for example:
one button "energy",
one button "politics"
one button "people"
one button "china"
etc...
Every time I press one of those buttons, the corresponding filter is stored in an array "selectedFilters", and I want to query my database to show only the articles matching those filters.
Multiple filters can be selected at the same time.
I know one way of doing it, with a POST request:
await fetch('http://187.345.32.33:3000/fetch-articles', {
  method: 'POST',
  headers: {'Content-Type': 'application/x-www-form-urlencoded'},
  // backticks, not plain quotes, are needed for ${} interpolation:
  body: `filters=${JSON.stringify(selectedFilters)}`
});
But the fact is, I read everywhere, and I was also taught, that POST requests are used when creating or removing, so theoretically what I should use is a GET request.
But I don't know how to send an Array with GET request.
I read online that I can pass multiple parameters in my URL (for example arr[0]=selectedFilters[0]&arr[1]=...), but I never know in advance how many items will be in my array.
And also I'm not sure if I could write it exactly the same way as my POST request above, but with GET:
await fetch('http://187.345.32.33:3000/fetch-articles', {
  method: 'GET',
  headers: {'Content-Type': 'application/x-www-form-urlencoded'},
  body: `filters=${JSON.stringify(selectedFilters)}`
});
or if I can only pass items in the URL - but does this work?
await fetch(`http://187.345.32.33:3000/fetch-articles?arr[0]=${selectedFilters[0]}`, {
Or even better, if something like this could work:
await fetch(`http://187.345.32.33:3000/fetch-articles?filters=${JSON.stringify(selectedFilters)}`, {
Thanks for your help
You should definitely use a GET request if your purpose is to fetch the data.
One way of passing the array through the URL is to use join to create a comma-separated string of all the filters. That way you don't need to know in advance how many elements are in the array. The server can then read the string from the URL and split it on the commas.
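For instance (a sketch reusing the endpoint from the question; the server-side half is shown as a comment):

const query = encodeURIComponent(selectedFilters.join(','));
const response = await fetch(`http://187.345.32.33:3000/fetch-articles?filters=${query}`, {
  method: 'GET'
});
// On the server, split the parameter back into an array, e.g. in Express:
// const filters = request.query.filters.split(',');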
One more method you can try is to keep a filters array on the server side for the session. You can then use a POST/PUT request to modify that array as the user adds or removes filters. Finally, you can use a parameterless GET request to fetch the news, since the server will already have the filters for that session.
But the fact is, I read everywhere, and I was also taught, that POST requests are used when creating or removing, so theoretically what I should use is a GET request.
Yes, you do read that everywhere. It's wrong (or at best incomplete).
POST serves many useful purposes in HTTP, including the general purpose of “this action isn’t worth standardizing.” (Fielding, 2009)
It may help to remember that on the HTML web, POST was the only supported method for requesting changes to resources, and the web was catastrophically successful.
For requests that are effectively read only, we should prefer to use GET, because general purpose HTTP components can leverage the fact that GET is safe (for example, we can automatically retry a safe request if the response is lost on an unreliable network).
I'm not sure if I could write it exactly the same way as my POST request above, but with GET
Not quite exactly the same way:
A client SHOULD NOT generate content in a GET request unless it is made directly to an origin server that has previously indicated, in or out of band, that such a request has a purpose and will be adequately supported. An origin server SHOULD NOT rely on private agreements to receive content, since participants in HTTP communication are often unaware of intermediaries along the request chain. -- RFC 9110
The right idea is to think about this in the framing of HTML forms; in HTML, the same collection of input controls can be used with both GET and POST. The difference is what the browser does with the information.
Very roughly, a GET form is used when you want to put the key-value pairs described by the submitted form into the query part of the request target. So, something roughly like:
await fetch(`http://187.345.32.33:3000/fetch-articles?filters=${JSON.stringify(selectedFilters)}`, {
  method: 'GET'
});
Although we would normally want to use a URI Template to generate the request URI, rather than worrying about escaping everything correctly "by hand".
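In browser JavaScript, URLSearchParams gets you most of the way there (React Native may need a polyfill such as react-native-url-polyfill); a sketch, reusing the question's endpoint and coping with any number of filters:

const params = new URLSearchParams();
for (const filter of selectedFilters) {
  params.append('filters', filter); // repeated key: ?filters=energy&filters=china
}

await fetch(`http://187.345.32.33:3000/fetch-articles?${params}`, {
  method: 'GET'
});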
However, there's no rule that says general-purpose HTTP components must support arbitrarily long URIs (for instance, Internet Explorer used to have a limit of just over 2000 characters).
To work around those limits, you might choose to support POST - it's a tradeoff: you lose the benefits of safe semantics and general-purpose cache invalidation, but you gain that it works in extreme cases.

Should I clean my objects before sending them to the server

I was working on my app today, and when my friend looked at my code he told me that before making an HTTP request to update objects I should remove the properties that are not used on my server, and I didn't understand why.
I didn't find any best practice or explanation on the web of why it's better to clean my objects before sending them to my server...
Let's say I have a dictionary with 100 keys & values with the same properties (but different values) like this one:
{
  '11': {'id': 11, 'name': 'test1', 'station': 2, 'price': 2, 'people': 6, 'show': true, 'light': true},
  '12': {'id': 12, 'name': 'test2', 'station': 4, 'price': 2, 'people': 1, 'show': true, 'light': false},
  ....
}
The only thing I need to change is the station of each pair. The new station number is set on my client and sent to my server to update my DB for each pair...
Should I iterate over the dictionary and clean every object before making an HTTP request to my server as my friend said?
I cannot add a comment because of my reputation, so I'll put this as an answer.
Not necessarily; it depends a lot on how your server's API works. If it expects an entire object, there's no use cleaning; if you have the option to send only the modified fields, you don't have to send the entire object.
The HTTP request will work the same way with a single field or with an entire object, but you can reduce the data traffic by sending less - only what's required, such as the changed values.
In summary, it depends a lot on your approach: by working with single values rather than whole objects you can write more generic functions and improve their entire scope.
Check THIS - it's similar to your question.
EDIT:
Maybe the cleanup he's referring to is a matter of tidying the code and sending only what's necessary; that's how I understood the scope of the question.
Remember that the less you send, the more intact the original object will remain (on the server).
It is good practice to create generic (modular) functions that only work with the necessary changes.
A couple of reasons that come to mind:
plan for the unknown: today your server doesn't care about the people attribute. But imagine you add something server-side and a people attribute appears that is a string. Now all your clients fail, because they try to push numbers into a string field.
save the world: data is energy, and you're wasting it by sending more data than your server needs, even if it's just a little.
save your own energy: sending more attributes is likely to mean more work (to write the code and/or test it).
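A sketch of what that "cleaning" can look like in practice - assuming the dictionary from the question is called pairs, with an invented endpoint - sending only the id and the changed station:

// Keep only the fields the server actually updates.
const payload = Object.values(pairs).map(({id, station}) => ({id, station}));

await fetch('http://example.com/api/stations', {
  method: 'PUT',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify(payload)
});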

Can I save changes to objects in another TR besides the one they are locked in?

When I try to switch to edit mode for a Report source, a popup comes up telling me
"A new task will be created for the following request of user XXX".
A transport request is also being suggested.
However, I don't want to save my changes in this request but in another existing one. I am not aware of any versioning system implemented in my system, and I don't know how to check that.
Is what I'm trying to achieve possible? And if so, how?
No, this is not possible. There are very good reasons for this being an exclusive lock - reasons you should know about before you attempt to change anything. Briefly speaking:
The CTS only notes that an object was touched, not what change was made.
When the transport is released, the entire object in its current state is exported - there is no delta/diff logic involved.
Therefore you can't transport changes to the same development object separately. Furthermore, if you sequence the transports manually, the second transport will always include the changes of the first one.
Things get slightly more complicated with partial objects - you can have LIMU METH objects (methods of a class) in different transports, but as soon as you try to lock the R3TR CLAS main class, you'll have to resolve that.

How do I cancel an asynchronous operation in Silverlight/WCF?

I am calling an asynchronous service from my Silverlight app and I want to be able to cancel that call after it is made. There is an e.Cancelled flag once the service has finished (i.e. If e.Cancelled Then), but how do you set that flag to true after you have made the call? How do you cancel the asynchronous call?
Let me clarify a bit... what I am trying to do is call the SAME method twice, one right after the other, and get the results of the last call into my collection. If I call an asynchronous method twice, there is no guarantee that the responses will come back in order, so I may end up with the results of the first call arriving last and having the wrong results in my collection. So what I would like to do is cancel the first call when I make the second, so I don't get results back from the first call at all. Seeing as there is a Cancelled flag in the completed event args, I figure you should be able to do this. But how?
It's async... the transfer is passed off to a remote server and it does not return until the server is done with it.
Basically the server will keep going, but you don't have to wait for the response. Disconnect your service completed event handler and pretend it was never called. That will give the effect of cancelling the operation.
If you really need to cancel something in progress on the server you would need to make another call to the server to cancel the first call. Assuming the first call is a very slow one, this might be possible.
Update (as question changed)
In the case you specify, it will be up to the server to cancel an operation in progress if a second one comes through, not up to the client. e.Cancelled is set server-side.
However... :)
You have exposed a client usability issue. Shouldn't you also delay sending any service request until an idle period has passed? That way rapid selections will not result in multiple service calls.
Also... :>
You may also want to send a sequence number to your service calls and return that as part of the result. Then you will know if it is the latest request or not.
It sounds like what you really want to do is ignore the responses of all but the most recent call.
Set a unique ID (a request number, a GUID, a timestamp, whatever) on the request, and make sure the service sends that same value back. Keep the ID of the most recent request and ignore responses that don't match it.
This is safer than cancelling the first request, because if the service has already started sending the response before the cancel request arrives, you would still hit your error condition.
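The same idea, sketched in JavaScript since the pattern is language-agnostic (searchService and the collection are hypothetical stand-ins for the Silverlight service proxy and its result handling):

let latestRequestId = 0;
let collection = [];

async function refreshCollection(query) {
  const requestId = ++latestRequestId; // tag this call
  const results = await searchService(query); // hypothetical async service call

  if (requestId !== latestRequestId) {
    return; // a newer call was issued meanwhile; drop these stale results
  }
  collection = results; // only the most recent response wins
}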

How can I prevent database being written to again when the browser does a reload/back?

I'm putting together a small web app that writes to a database (Perl CGI & MySQL). The CGI script takes some info from a form and writes it to a database. I notice, however, that if I hit 'Reload' or 'Back' on the web browser, it'll write the data to the database again. I don't want this.
What is the best way to protect against the data being re-written in this case?
Do not use GET requests to make modifications! Be RESTful: use POST (or PUT) instead, and the browser will warn the user before re-sending the request. Redirecting (using an HTTP redirect) to a receipt page fetched with a normal GET request after the POST/PUT makes it possible to refresh the page without being warned about resubmitting.
EDIT:
I assume the user is logged in somehow, and therefore you already have some way of tracking the user, e.g. a session or similar.
You could generate a timestamp (or a random hash etc.) when displaying the form, storing it both as a hidden field (right next to the anti-cross-site-request token I'm sure you already have there) and in a session variable (which is stored safely on your server). When you receive the POST/PUT request for this form, check that the timestamp matches the one in the session. If it does, set the session value to something variable and hard to guess (the timestamp concatenated with some secret string, for instance), then save the form data. If someone repeats the request now, you won't find the same value in the session variable and can deny the request.
The problem with doing this is that the form becomes invalid if the user clicks back to change something, which might be a bit harsh unless it's money you're updating. So if your problem is "careless" users who refresh or click the back button and accidentally repost something, just using POST will remind them not to do that, and redirecting will make it less likely. If you have a problem with malicious users, you should use a timestamp too, although it will confuse users sometimes; and if a user is deliberately posting the same message over and over, you probably need to find a way to ban them. Using POST, having a timestamp, and even doing a full comparison against the whole database to check for duplicate posts won't help at all if malicious users simply write a script to load the form and submit random garbage automatically. (But cross-site-request protection makes that a lot harder.)
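A sketch of that hidden-field-plus-session scheme - shown with Express and express-session purely for illustration, since the question's stack is Perl CGI, and saveToDatabase is a hypothetical helper:

const express = require('express');
const session = require('express-session');
const crypto = require('crypto');

const app = express();
app.use(express.urlencoded({extended: false}));
app.use(session({secret: 'change-me', resave: false, saveUninitialized: true}));

app.get('/form', (req, res) => {
  const token = crypto.randomBytes(16).toString('hex');
  req.session.formToken = token; // the server-side copy
  res.send(`<form method="POST" action="/submit">
    <input type="hidden" name="token" value="${token}">
    <input type="text" name="comment">
    <button>Save</button>
  </form>`);
});

app.post('/submit', (req, res) => {
  if (req.body.token !== req.session.formToken) {
    return res.status(409).send('Duplicate or expired submission.');
  }
  delete req.session.formToken; // single use: a resubmit no longer matches
  saveToDatabase(req.body); // hypothetical DB write
  res.redirect(303, '/receipt'); // hand off to a GET, as the last answer below suggests
});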
Using a POST request will cause the browser to try to prevent the user from submitting the same request again, but I'd recommend session-based transaction tracking of some kind, so that if the user ignores the warnings from the browser and resubmits his query, your application will prevent duplicate changes to the database. You could include a hidden input in the submission form with its value set to a crypto hash, and record that hash once the request has been submitted and processed without error.
I find it handy to track the number of form submissions the user has performed in their session. Then, when rendering the form, I create a hidden field containing that number. If the user resubmits the form by pressing the back button, it will submit the old number, and the server can tell that the user has already submitted the form by comparing what's in the session with what the form is saying.
Just my 2 cents.
If you aren't already using some sort of session management (which would let you note and track form submissions), a simple solution is to include a unique identifier in the form (as a hidden element) that is either part of the main DB transaction itself or tracked in a separate DB table. Then, when a form is submitted, you check the unique ID against what has already been processed. And each time the form itself is rendered, you just have to make sure you have a fresh unique ID.
First of all, you can't trust the browser, so any talk about using POST rather than GET is mostly nerd flim-flam. Yes, the client might get a warning along the lines of "Did you mean to resubmit this data again?", but they're quite possibly going to say "Yes, now leave me alone, stupid computer".
And rightly so: if you don't want duplicate submissions, then it's your problem to solve, not the user's.
You presumably have some idea what it means to be a duplicate submission. Maybe it's the same IP within a few seconds, maybe it's the same title of a blog post or a URL that has been submitted recently. Maybe it's a combination of values - e.g. IP address, email address and subject heading of a contact form submission. Either way, if you've manually spotted some duplicates in your data, you should be able to find a way of programmatically identifying a duplicate at the time of submission, and either flagging it for manual approval (if you're not certain), or just telling the submitter "Have you double-clicked?" (If the information isn't amazingly confidential, you could present the existing record you have for them and say "Is this what you meant to send us? If so, you've already done it - hooray")
I'd not rely on POST warnings from the browser. Users just click OK to make messages go away.
Any time you have a request that needs to be one-time-only, e.g. 'make a payment', send a unique token down that must be submitted back with the request. Throw the token out once it comes back, and you can then tell when something is not a valid submission (anything with a token that isn't 'active'). Expire active tokens after a certain amount of time, e.g. when the user session ends.
(Alternatively, track the tokens that have come back; if you have received one before, then it is invalid.)
Do a POST every time you alter data, but never return an HTML response from the POST... instead return a redirect to a GET that retrieves the updated data as a confirmation page. That way there is no worry about them refreshing the page. If they refresh, all that happens is another retrieve, never a data-altering action.
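Continuing the Express sketch from the earlier answer (route names invented), this Post/Redirect/Get pattern looks like:

app.post('/entries', (req, res) => {
  writeToDatabase(req.body); // hypothetical data-altering action
  res.redirect(303, '/entries/latest'); // 303 See Other forces a GET
});

app.get('/entries/latest', (req, res) => {
  // Refreshing this confirmation page only re-GETs; it never re-writes.
  res.send(renderConfirmationPage()); // hypothetical renderer
});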
