Here's my dilemma. I have a series of tables with leads from our various contact forms. We also have tables for spam that comes in on our forms. Occasionally, though, a lead gets routed incorrectly to a spam table. So I need to move that lead from one table to the other, which means inserting it into one and deleting it from the other.
As I understand it, when Backbone's save method is called, it checks whether the model already has an id. If it doesn't, it makes a POST request; if it does, it makes a PUT request. I need to be able to force Backbone to make a POST request so that Laravel can call the right RESTful action.
The problem is that if Backbone makes a PUT request to, say, /send-message/52 (52 being the ID of the lead in the send-message-spam table), it will update/overwrite the existing lead with ID 52. I want to make a POST request to /send-message (without an ID, obviously).
I can force Backbone to use a different urlRoot, but how do I force it to make a POST when I call save()?
I'm unsure if there is a configuration option to do this, but you could always override the save method in your Backbone model to have it perform the action you desire.
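For example, here is one way to do it (a sketch, not from the original thread; it relies on Backbone.sync merging the options you pass to save() over its own defaults, so a type and url given there win over the verb and URL it would otherwise derive from isNew(); the model and endpoint names are assumptions):

// Hypothetical Lead model; urlRoot and endpoints are illustrative.
var Lead = Backbone.Model.extend({
    urlRoot: '/send-message-spam',

    // Force a POST to the leads endpoint even though the model has an id.
    // Backbone.sync extends its default params with these options, so the
    // overridden type and url take precedence.
    moveToLeads: function (options) {
        return this.save(null, _.extend({
            url: '/send-message',
            type: 'POST'
        }, options));
    }
});

// Usage: POST the lead to /send-message, then DELETE it from the spam table.
var lead = new Lead({ id: 52, email: 'someone@example.com' });
lead.moveToLeads({
    success: function () { lead.destroy(); }   // DELETE /send-message-spam/52
});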
How can I invalidate a single item when working with useInfiniteQuery?
Here is an example that demonstrates what I am trying to accomplish.
Let's say I have a list of members and each member has a follow button. When I press the follow button, there is a separate call to the server to mark that the given user is following another user. After this, I have to invalidate the entire infinite query to reflect the follow state for a single member. That means I might have a lot of users loaded in the infinite query, and I need to re-fetch all the items that were already loaded just to reflect the change for one item.
I know I can change the value with queryClient.setQueryData when the follow request succeeds, but without following that with an invalidation and refetch of the member, I am basically going out of sync with the server and relying on local data.
Any possible ways to address this issue?
Here is a reference UI photo, in case it's helpful.
I think it is not currently possible because react-query has no normalized caching and no underlying schema. So one entry in a list (doesn't matter if it's infinite or not) does not correspond to a detail query in any way.
If you prefix the query-keys with the same string, you can utilize the partial query key matching to invalidate in one go:
['users', 'all']
['users', 1]
['users', 2]
queryClient.invalidateQueries(['users']) will invalidate all three queries.
But yes, it will refetch the whole list, and if you don't want to manually set with setQueryData, I don't see any other way currently.
If you return the whole detail data for one user from your mutation, I don't see why setting it with setQueryData would get you out-of-sync with the backend though. We are doing this a lot :)
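For what it's worth, here is a rough sketch of that setQueryData approach (react-query v3 style; the ['users', 'all'] key, the page.items shape, and the followUser mutation function that resolves with the updated user are all assumptions for illustration):

import { useMutation, useQueryClient } from 'react-query';

// followUser(userId) is an assumed mutation function that POSTs the follow
// and resolves with the updated user object returned by the server.
function useFollowUser() {
  const queryClient = useQueryClient();
  return useMutation(followUser, {
    onSuccess: (updatedUser) => {
      // Patch just this one user inside the cached infinite data,
      // leaving every other loaded page untouched.
      queryClient.setQueryData(['users', 'all'], (data) => {
        if (!data) return data;
        return {
          ...data,
          pages: data.pages.map((page) => ({
            ...page,
            items: page.items.map((user) =>
              user.id === updatedUser.id ? updatedUser : user
            ),
          })),
        };
      });
    },
  });
}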
What is a good general practice in Backbone.js for when some action changes multiple models:
1. Trigger a separate PUT request via each model.save()
2. A single request to sync the entire collection
If more than one model has changed, it should definitely be the second option.
Usually, good REST API practice seems to suggest that you should update, save, create, and delete single instances of persistent elements. In fact, you will find that a Backbone.Collection object does not implement these methods.
Also, if you use a standard URI scheme for your data access point, you will notice that a collection does not have a unique id.
GET    /models      // read all items
GET    /models/:id  // read an element
PUT    /models/:id  // update an element
POST   /models      // create an element
DELETE /models/:id  // delete an element
If you need to update every model of a collection on the server at once, maybe you should ask why; the model structure might need some rethinking. Perhaps there should be a separate model holding that common information.
As suggested by Bart, you could implement a PATCH method to update only changed attributes of a particular element, thus saving bandwidth.
I like the first option, but I'd recommend you implement a PATCH behavior (only send updated attributes) to keep the requests as small as possible. This method gives you a more native "auto-save" feel like Google Docs. Of course, this all depends on your app and what you are doing.
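For reference, Backbone has supported exactly this since 0.9.9: pass {patch: true} to save() and it issues an HTTP PATCH containing only the attributes you pass. The model and attribute names below are made up for illustration:

// Sends PATCH /models/7 with a body containing only {"title": "New title"}.
var doc = new Backbone.Model({ id: 7, title: 'Old title', body: '...' });
doc.urlRoot = '/models';
doc.save({ title: 'New title' }, {
    patch: true,
    success: function (model, response) {
        console.log('partial update accepted', response);
    }
});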
I have a situation where, in a model's afterSave callback, I'm trying to access data from a distant association (it's a legacy data model with a very wonky association linkage). What I'm finding is that within the callback I can execute a find call on the model, but if I exit right then, the record is never inserted into the database. The lack of a record means that I can't execute a find on the related model using data that was just inserted into the current one.
I haven't found any mention of when data is actually committed with respect to when the afterSave callback is engaged. I'm working with legacy code, but I see no indication that we're specifically engaging transactions, so I'm trying to figure out what my options might be.
Thanks.
UPDATE
The gist of the scenario is this: We're taking event registrations, but folks can be wait listed. A user can register (or be registered) for a given Date. After a registration is complete, I need to check the wait list for the existence of a record for the registering user (WaitList.user_id) on the date being registered for (WaitList.date_id). If such a record exists, it can be deleted because it's become an active registration.
The legacy schema puts me in a place where the registration isn't directly tied to a date so I can't get the Date.id easily. Instead, Registration->Registrant->Ticket->Date. Unintuitive, I know, but it is what it is for now. Even better (sarcasm included), we have a view named attendees that rolls all of this info up and from which I would be able to use the newly created Registration->id to return Attendee.date_id. Since the record doesn't exist, it's not available in the view.
Hopefully that provides a little more context.
What's the purpose of the find query inside of your afterSave?
Update
Is it at all possible to properly associate the records? Or are we talking about way too much refactoring for it to be worth it? You could move the check to the controller if it's not possible to modify the associations between the records.
Something like this (in pseudo code):
// in the controller; model and field names are illustrative
if ($this->Registration->save($this->data)) {
    $entry = $this->WaitList->find('first', array('conditions' => array(
        'WaitList.user_id' => $userId, 'WaitList.date_id' => $dateId)));
    if ($entry) {
        $this->WaitList->delete($entry['WaitList']['id']);
    }
}
It's not best practice, but it will get you around your issue.
I have a User object that, upon successful authentication, is tucked into the session (sans security info) for easy recall and for determining whether we have an authenticated user or anonymous session. There are several paths by which the user can alter some or all of his or her information and I'd like to keep that session value up to date. The obvious answer is to update the value in the afterSave() callback, but that, of course, violates MVC.
Is there another way of capturing every change in one place so that I don't have to drop session writes all over the place? I can't think of anything, nor have I been able to find any other ideas. Am I the only person trying to do something like this?
Thanks.
Final Solution: I marked neilcrookes' response as the answer, frankly, because there doesn't seem to be a better way. Since this approach violates my OCD senses, though, I took a slightly different path. I decided to have my User::authenticate() method return the authenticated user object to the caller so it can do whatever it wants with it. One of the things that the callers "want" to do is to drop that value in the session. It's redundancy, but it's very, very limited. In my mind, that felt better than accessing the session from the model (though it's certainly a damned if you do, damned if you don't scenario).
// in the users controller
if ($this->User->save()) {
    $this->Auth->login($this->User->read());
    $this->Session->setFlash(/* .. etc */);
}
And for the record, I do not agree with neilcrookes' answer, but I will refrain from feeding the troll.
Some might disagree, but I'd screw MVC, do it in Model::afterSave(), and use $_SESSION - test for the session before writing to it, in case it's not started, for example when you are saving against the model in a shell or something.
MVC is a general pattern - a guideline. You can bang your head against it trying to figure out how to achieve something that doesn't quite fit, or just do it another way and move on to something more important.
Bring on the flames.
In afterSave(), use something like this:
$this->Session->write('Auth.User.mmid', $kinde['Kindle']['id']);
You should be able to just use AppController to create the necessary callback(s) that keep your session data up to date. So, for instance, you could have your User model afterSave() set a property called changed to true. Then in your AppController->afterFilter() you check that property and update the session data as necessary.
Alternatively, you could write a component through which to update your user info and also your session data. Then any controller that needs to change user info just needs to include that component.
There's no need to write redundant code or break MVC.
I'm putting together a small web app that writes to a database (Perl CGI & MySQL). The CGI script takes some info from a form and writes it to a database. I notice, however, that if I hit 'Reload' or 'Back' on the web browser, it'll write the data to the database again. I don't want this.
What is the best way to protect against the data being re-written in this case?
Do not use GET requests to make modifications! Be RESTful; use POST (or PUT) instead, and the browser will warn the user before resubmitting the request. Redirecting (using an HTTP redirect) to a receipt page served by a normal GET request after a POST/PUT request makes it possible to refresh the page without being warned about resubmitting.
EDIT:
I assume the user is logged in somehow, and that you therefore already have some way of tracking the user, e.g. a session or similar.
You could generate a timestamp (or a random hash, etc.) when displaying the form, storing it both as a hidden field (right next to the anti cross-site-request token I'm sure you already have there) and in a session variable (which is stored safely on your server). When you receive the POST/PUT request for this form, you check that the timestamp matches the one in the session. If it does, you set the session value to something variable and hard to guess (the timestamp concatenated with some secret string, for instance) and then save the form data. If someone repeats the request now, the value in the session variable won't match and you deny the request.
The problem with doing this is that the form becomes invalid if the user clicks back to change something, which might be a bit too harsh unless it's money you're updating. So if your problem is careless users who refresh or click the back button and accidentally repost something, just using POST will remind them not to do that, and redirecting will make it less likely. If you have a problem with malicious users, you should use a timestamp too, although it will confuse users sometimes; if users are deliberately posting the same message over and over, you probably need to find a way to ban them. Using POST, having a timestamp, and even doing a full comparison against the whole database to check for duplicate posts won't help at all if a malicious user just writes a script that loads the form and submits random garbage automatically. (But cross-site-request protection makes that a lot harder.)
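A minimal sketch of that token-in-session idea, written in Express-style JavaScript rather than Perl CGI purely to illustrate the flow; the routes, field names, and express-session setup are assumptions:

const express = require('express');
const session = require('express-session');
const crypto = require('crypto');

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

// Render the form with a fresh one-time token, stored both in the session
// and in a hidden field.
app.get('/form', (req, res) => {
  req.session.formToken = crypto.randomUUID();
  res.send('<form method="POST" action="/submit">' +
    '<input type="hidden" name="token" value="' + req.session.formToken + '">' +
    '<input name="message"><button>Send</button></form>');
});

// Accept the submission only if the token matches, then burn it so a reload
// or back-button resubmit is rejected as a duplicate.
app.post('/submit', (req, res) => {
  if (!req.body.token || req.body.token !== req.session.formToken) {
    return res.status(409).send('Looks like a duplicate submission.');
  }
  req.session.formToken = null;
  // ... write the form data to the database here ...
  res.send('Thanks, your message was recorded.');
});

app.listen(3000);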
Using a POST request will cause the browser to try to prevent the user from submitting the same request again, but I'd recommend using session-based transaction tracking of some kind so that if the user ignores the warnings from the browser and resubmits the query, your application will prevent duplicate changes to the database. You could include a hidden input in the submission form with its value set to a crypto hash, and record that hash once the request is submitted and processed without error.
I find it handy to track the number of form submissions the user has performed in their session. Then, when rendering the form, I create a hidden field that contains that number. If the user resubmits the form by pressing the back button, it'll submit the old number, and the server can tell that the user has already submitted the form by comparing what's in the session with what the form is saying.
Just my 2 cents.
If you aren't already using some sort of session management (which would let you note and track form submissions), a simple solution would be to include a unique identifier in the form (as a hidden element) that is either part of the main DB transaction itself or tracked in a separate DB table. Then, when a form is submitted, you check the unique ID to see if it has already been processed. And each time the form itself is rendered, you just have to make sure you have a unique ID.
First of all, you can't trust the browser, so any talk about using POST rather than GET is mostly nerd flim-flam. Yes, the client might get a warning along the lines of "Did you mean to resubmit this data again?", but they're quite possibly going to say "Yes, now leave me alone, stupid computer".
And rightly so: if you don't want duplicate submissions, then it's your problem to solve, not the user's.
You presumably have some idea what it means to be a duplicate submission. Maybe it's the same IP within a few seconds, maybe it's the same title of a blog post or a URL that has been submitted recently. Maybe it's a combination of values - e.g. IP address, email address and subject heading of a contact form submission. Either way, if you've manually spotted some duplicates in your data, you should be able to find a way of programmatically identifying a duplicate at the time of submission, and either flagging it for manual approval (if you're not certain), or just telling the submitter "Have you double-clicked?" (If the information isn't amazingly confidential, you could present the existing record you have for them and say "Is this what you meant to send us? If so, you've already done it - hooray")
I'd not rely on POST warnings from the browser. Users just click OK to make messages go away.
Any time you have a request that needs to happen only once, e.g. 'make a payment', send a unique token down that gets submitted back with the request. Throw the token out after it comes back, so you can tell when something is not a valid submission (anything with a token that isn't 'active'). Expire active tokens after a set amount of time, e.g. when a user session ends.
(Alternatively, track the tokens that have come back; if you have received a given token before, the request is invalid.)
Do a POST every time you alter data, but never return an HTML response from a POST... instead return a redirect to a GET that retrieves the updated data as a confirmation page. That way, there is no worry about them refreshing the page. If they refresh, all that will happen is another retrieve, never a data-altering action.
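To make that concrete, here's a rough sketch of that Post/Redirect/Get flow in Express-style JavaScript rather than Perl CGI; the route names and the saveMessage database helper are hypothetical:

const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

// Data-altering action: handle the POST, write the data, then redirect
// instead of returning HTML directly.
app.post('/messages', (req, res) => {
  const id = saveMessage(req.body);       // hypothetical helper that inserts the row
  res.redirect(303, '/messages/' + id);   // 303 See Other -> browser issues a GET
});

// Confirmation page: a plain GET that is safe to reload as many times as
// the user likes.
app.get('/messages/:id', (req, res) => {
  res.send('Message ' + req.params.id + ' saved.');
});

app.listen(3000);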