For example, I want to search, insert, get, and delete data in a database, and I'm working with a WCF RESTful service.
I have one method for getting data from the table, one for searching the table, one for inserting data into the table, and one for deleting data from the table.
I know that each of these methods could use POST or GET.
But which is smartest? What is the best practice?
My opinion is that the search and get methods should be GET, and the insert and delete methods should be POST.
Am I right?
You are right. The thing about GET is that it should be idempotent, as the client (browser) is free to resend a GET any time it wants. A POST, however, may only be sent once (according to the rules).
So anything that changes anything should be a POST. Strictly speaking, the delete could be a GET as well, since re-sending the GET would not hurt the delete, but a GET is also supposed to be safe (no side effects), so generally it's better to respect the spirit of the HTTP protocol. See the HTTP RFC 2616 for more details.
Wikipedia has a good overview of the HTTP verbs and their use.
If I were you, I'd use:
GET for search and get operations (since they will not modify data; it's safe to call these operations multiple times)
POST for the insert operation
DELETE for the delete operation
(IIS has no problem with the DELETE verb.)
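To make that concrete, here is a minimal sketch of the calls a client would end up making, assuming a hypothetical /books resource (TypeScript, just to illustrate the verb mapping):

```typescript
// Hypothetical /books resource, purely to illustrate the verb mapping above.
const base = "https://example.com/api/books";

async function demo(): Promise<void> {
  // GET: get-by-id and search -- safe, repeatable calls
  await fetch(`${base}/42`);
  await fetch(`${base}?q=tolkien`);

  // POST: insert a new book -- changes state, so not a GET
  await fetch(base, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ title: "The Hobbit" }),
  });

  // DELETE: remove a book
  await fetch(`${base}/42`, { method: "DELETE" });
}
```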
Yes, that's the convention.
Use POST for operations that change data or system state. Use GET for queries that don't change anything.
Rails, for example, goes further by also using PUT and DELETE, but those verbs are not supported by most web servers, so there is a workaround for this (sketched after the references below).
References:
Nginx does not include support for PUT and DELETE by default (sorry, only Russian documentation is available).
Same for Apache.
These two have 70% of the market.
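The workaround mentioned above usually amounts to tunnelling the "real" verb through a POST. A rough sketch (TypeScript; Rails reads a hidden _method form field, other stacks use a header such as X-HTTP-Method-Override; the URL is illustrative):

```typescript
// Sketch of the usual workaround: tunnel the "real" verb through a POST.
// Rails reads a hidden "_method" form field; other stacks use a header
// such as X-HTTP-Method-Override. URL below is illustrative.
async function deleteViaPost(url: string): Promise<Response> {
  const body = new URLSearchParams({ _method: "delete" });
  return fetch(url, { method: "POST", body }); // server-side middleware treats it as DELETE
}
```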
I'm using Datomic on the server side, with multiple Reagent atoms on the client, and I'm now looking at trying DataScript on the client.
Currently, I'm passing a nested structure across via an initial API load, which contains the result of a Datomic pull query. It's pretty concise and works fine.
However, I'm now looking to explore the potential benefits of DataScript. The selling point is that it seems to let you retain normalisation right down to the attribute level. But I've come across an initial hurdle. DataScript isn't, as I'd imagined (perhaps hoped...), a way to just take a subset of your Datomic db and replicate it on the client. The problem is that Datomic's entity ids cannot be shared with DataScript; specifically, when you transact! entities into DataScript, a new eid (DataScript's own) is issued for each entity.
I haven't worked through all of the consequences yet, but it appears it would be necessary to store a :datomic-id in DataScript in addition to DataScript's own newly issued :db/id, and ref types are going to use DataScript's id, not Datomic's. This potentially complicates synchronisation back to Datomic, feels like it could create a lot of potential gotchas, and isn't as isomorphic as I'd hoped. But I'm still working on it. Can anyone share experience here? Maybe there's a solution...
Update:
I wonder if a solution is to ban use of Datomic's :db/id on the client, enforcing this by filtering the ids out of the initial load, i.e. not passing them to the client at all. Then any client -> server communication would have to use the (server-generated) slugs instead, which are passed in the initial load.
So all entities would have different ids on the client, but since we ban passing the server id to the client, a client id accidentally passed to the server should simply produce an "eid not found" error. There are likely more issues with this; I haven't worked it right through yet.
You also have to think in entities, not datoms, when passing data to and transacting it into the client, so as to create the correct refs there (or perhaps you could insert a tree, if you can wrangle that).
So I've discovered that the Datomic/DataScript partnership certainly isn't just a case of "serialise a piece of your database". That might work if you were using DataScript on the server, which is not the use case here at all (db persistence being required).
If I remember correctly, Datomic uses all 64 bits for entity ids, but in JavaScript (and by extension in DataScript) integers are only safe up to 53 bits. So some sort of translation layer is necessary either way; there's no way around it.
P.S. You can totally set :db/id to whatever you want in DataScript and it'll use that instead of generating its own. Just make sure it fits in 53 bits.
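For what it's worth, such a translation layer can be as simple as a pair of maps on the client. A sketch (TypeScript here only because the 53-bit limit is really a JavaScript-number constraint; all names are made up):

```typescript
// Sketch of a client-side translation layer between server (Datomic) ids and
// client (DataScript) ids. All names here are made up for illustration.
// Server ids travel as strings so full 64-bit values survive JSON/JS numbers.
const serverToClient = new Map<string, number>();
const clientToServer = new Map<number, string>();
let nextClientId = 1;

function clientIdFor(serverId: string): number {
  let id = serverToClient.get(serverId);
  if (id === undefined) {
    id = nextClientId++;
    if (!Number.isSafeInteger(id)) throw new Error("exceeded 53-bit id space");
    serverToClient.set(serverId, id);
    clientToServer.set(id, serverId);
  }
  return id;
}
```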
This is not so much a technical question as a reflection on the subject.
REST has popularised a good way of serving resources over HTTP and lets developers keep their projects clean by splitting resources from the user interface (with REST APIs, the back-end now really deals with the back-end only).
In those APIs, most of the time we use GET, PUT/PATCH (hmm, meh?), POST and DELETE, which together mimic database CRUD.
But as time spent on our projects goes by, we feel the UX can be improved by adding lots of great features. For example, why should the user be afraid of deleting a resource? Why not just provide a recovery system (like the one in the Google Keep app that lets us undo a deletion, which I think is awesome in terms of UX)?
One of the practices for preventing unintentional deletion is to use a flag column in the table that represents the resource. For example, I am about to delete a book; by clicking the delete button I only flag that row as "deleted = TRUE" in my database, and I stop displaying rows flagged as deleted when browsing the list of resources (GET). Something like the sketch below.
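Roughly what I mean, as a sketch (TypeScript with an in-memory stand-in for the table; names are just for illustration):

```typescript
// In-memory stand-in for the books table, just to illustrate the flag idea.
interface Book { id: number; title: string; deleted: boolean }
const books: Book[] = [{ id: 1, title: "Dune", deleted: false }];

// "Delete" handler: only flips the flag, nothing is destroyed.
function softDelete(id: number): void {
  const book = books.find(b => b.id === id);
  if (book) book.deleted = true;
}

// Listing (GET) hides flagged rows.
function listBooks(): Book[] {
  return books.filter(b => !b.deleted);
}
```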
This practice comes into conflict with our dear and beloved REST pattern, as there is no distinction between DELETE and DESTROY "methods".
What I mean is: should we think about making REST evolve with our UX needs, which also means making the HTTP protocol evolve, or should it stay a purist form of resource management, where we follow the HTTP protocol without trying to bend it and instead adapt to it using workarounds (like using PATCH for soft deletion)?
Personally I would like to see at least 4 new methods, since we are trying to describe the state of a resource as well as possible:
DELETE becomes a way to prevent the other methods from having any effect on the resource
DESTROY becomes more dramatic, completely removing any trace of the resource
RECOVER is a way to tell the other methods "hey guys, it's coming back, stay tuned"
TRASH is like a GET, but only for DELETED resources
What made me think about this is my search for a clean REST solution to deal with this resource behaviour. I have seen some posts, including:
https://www.pandastrike.com/posts/20161004-soft-deletes-http-api
https://philsturgeon.uk/rest/2014/05/25/restful-deletions-restorations-and-revisions/
...
which advise using PUT or PATCH to make soft deletion workable, but I kind of feel that doesn't sound right, does it? (Something along the lines of the sketch below, as far as I understand it.)
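```typescript
// Sketch of a PATCH-based soft deletion; the URL and field name are made up.
function trashBook(id: number): Promise<Response> {
  return fetch(`https://example.com/api/books/${id}`, {
    method: "PATCH",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ deleted: true }),
  });
}
```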
My thoughts on this problem:
Is there a big step between proposing new HTTP methods and updating existing ones? (I hear HTTP/2 is a thing; maybe we could ship these in it?)
Does it make sense outside the web development realm? I mean, could these changes impact domains other than ours?
I'm not sure this makes sense even within the web development realm; the starting premises seem to be wrong.
RFC 7231 offers this explanation for POST:
The POST method requests that the target resource process the representation enclosed in the request according to the resource's own specific semantics.
Riddle: if this is the official definition of POST, why do we need GET? Anything that the target can do with GET can also be done with POST.
The answer is that the additional constraints on GET allow participants to make intelligent decisions using only the information included in the message.
For example, because the header data informs the intermediary component that the method is GET, the intermediary knows that the action upon receiving the message is safe, so if a response is lost the message can be repeated.
The entire notion of crawling the web depends upon the fact that you can follow any safe links to discover new resources.
The browser can pre-fetch representations, because the information encoded in the link tells it that the message to do so is safe.
The way that Jim Webber describes it: "HTTP is an application, but it's not your application". What the HTTP specification does is define the semantics of messages, so that a generic client can be understood by a generic server.
To use your example: the API consumer may care about the distinction between delete and destroy, but the browser doesn't; the browser just wants to know what message to send, what the retry rules are, what the caching rules are, how to react to various error conditions, and so on.
That's the power of REST -- you can use any browser that understands the media-types of the representations, and get correct behavior, even though the browser is completely ignorant of the application semantics.
The browser doesn't know that it is talking to an internet message board or the control panel of a router.
In summary: your idea looks to me as though you are trying to achieve richer application semantics by changing the messaging semantics; which violates separation of concerns.
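To make that concrete: the richer semantics you want can live in the resources rather than in new methods. One possible mapping, sketched with invented URLs (an illustration, not a prescription):

```typescript
// One possible mapping, with invented URLs: the trash is just another
// resource, so the existing methods are enough to express the distinctions.
const api = "https://example.com/api";

// "DELETE" (soft): removes the book from the active collection -> it lands in the trash.
const softDelete = (id: number) =>
  fetch(`${api}/books/${id}`, { method: "DELETE" });

// "TRASH": just a GET on the trash collection.
const listTrash = () => fetch(`${api}/trash/books`);

// "RECOVER": POST to a recovery endpoint (or PATCH the deleted flag back).
const recover = (id: number) =>
  fetch(`${api}/trash/books/${id}/recover`, { method: "POST" });

// "DESTROY": DELETE on the trashed representation removes it for good.
const destroy = (id: number) =>
  fetch(`${api}/trash/books/${id}`, { method: "DELETE" });
```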
I'm reaching out to gain perspective on possible solutions to this problem. I'll be using Angular and Rails, but really this problem is a bit more abstract and doesn't need to be answered in the context of these frameworks.
What are the best practices for managing complex nested SQL associations on the front-end?
Let's say you have posts and comments and comments are nested under posts. You send your posts to the front-end as JSON with comments nested under them. Now you can display them listed under each post, great. But then questions arise:
What if you want to display recent comments as well? Your comment service would need to have comments in a normalized collection or gain access to them in a fashion that allows them to be sorted by date.
Does this mean you make a separate API call for comments sorted by date? This would duplicate comments on the front-end and require you to update them in two places instead of one (once under the posts and once in the comments collection, assuming comments can be edited or updated).
Do you implement some kind of front-end data normalization? This meaning you have a caching layer that holds the nested data and then you distribute the individual resources to their corresponding service?
What if you have data with varying levels of nesting? Continuing with the posts and comments example: what if your comments can be replied to, up to 10 levels deep?
How does this affect your data model if you've made separate API calls for posts and comments?
How does this affect your caching layer if you choose that approach?
What if we're not just talking about posts? What if you can comment on photos and other resources?
How does this affect the two data-modeling options above?
Breaking from the example, what if we were talking about recursive relationships between friended users?
My initial thoughts and hypothetical solution
My initial thought is to attack this with a caching layer and normalize the data such that:
The caching layer handles any normalization necessary
The caching layer holds ONE canonical representation of each record
The services communicate with the caching layer to perform CRUD actions
The services generally don't care nor do they need to know how nested/complex the data model is, by the time the data reaches the services it is normalized
Recursive relationships would need to be made finite at some point; you can't just continue nesting forever.
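Roughly what I have in mind for that caching layer, as a sketch (TypeScript; types and names are made up):

```typescript
// One canonical copy of every record, keyed by id; nested API payloads are
// flattened into these maps, and services only read/write through them.
interface Comment { id: number; postId: number; body: string; createdAt: string }
interface Post { id: number; title: string; commentIds: number[] }

const cache = {
  posts: new Map<number, Post>(),
  comments: new Map<number, Comment>(),
};

// Normalize a nested payload: comments stored once, posts keep only refs.
function ingest(payload: { id: number; title: string; comments: Comment[] }[]): void {
  for (const p of payload) {
    for (const c of p.comments) cache.comments.set(c.id, c);
    cache.posts.set(p.id, { id: p.id, title: p.title, commentIds: p.comments.map(c => c.id) });
  }
}

// A "recent comments" view reads the same canonical records -- no duplication.
function recentComments(limit: number): Comment[] {
  return [...cache.comments.values()]
    .sort((a, b) => b.createdAt.localeCompare(a.createdAt))
    .slice(0, limit);
}
```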
This all of course sounds great, but I see lots of potential pitfalls and want to gain perspective. I'm finding it difficult to separate the abstract best practices from the concrete solutions to specific data models. I'm very interested to know how others have solved this problem and how they would go about solving it.
Thanks!
I assume you will use RESTful APIs. Note that I don't know Rails, but I will suggest some general practices that you might consider.
Let's say you have one page that shows 10 posts and their 10 comments sorted by date; make that response possible in one API call.
There is another page that shows only 5 posts and no comments; use the same API endpoint.
Make this possible with some query parameters.
Try to optimize your response as much as you can.
You can have multiple response types in one endpoint, in any programming language; when we're talking about APIs, that's how I do the job.
If your query takes a long time and runs several times, then of course you need to cache, but 10 posts per API call doesn't need caching. It should not be hard on the database.
For the nesting problem you can have a mechanism to make it possible, e.g.:
to fetch 10 posts and all their comments, I can send a query parameter saying that I want to include all comments of each post,
like bar.com/api/v1/posts?include=comments
If I need only some customized data for the comments, I should be able to implement a custom include,
like bar.com/api/v1/posts?include=recent_comments
Your API layer should first try to match a custom include; if none is found, fall back to the relations of the resource.
For deeper references, like comments.publisher or recent_comments.publisher, your API layer needs to know which resource it is currently working on. You won't need this for a normal include, but custom includes should describe which model/resource they point to; that way it is possible to create an endless chain.
I don't know Rails, but you can make this pattern work easily if you have a powerful ORM/ODM.
Sometimes you need to do some filtering; the same approach works for that job too.
You can have a filter query parameter and implement some custom filters,
e.g.
bar.com/api/v1/posts?include=recent_comments&filters=favorites
Or forget about all of that and do something like the following:
bar.com/api/v1/posts?transformation=PageA
this will return 10 recent posts with their 10 recent comments
bar.com/api/v1/posts?transformation=PageB
this will return only 10 recent posts
bar.com/api/v1/posts?transformation=PageC
this will return 10 recent posts and all their comments
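A rough sketch of how those parameters could drive a single endpoint (TypeScript just for illustration; no particular framework assumed, and the preset names mirror the examples above):

```typescript
// Invented names; no particular framework assumed. The presets mirror the
// PageA/PageB/PageC examples above, with ?include=... as the fallback.
interface PostQuery { limit: number; include: string[] }

const transformations: Record<string, PostQuery> = {
  PageA: { limit: 10, include: ["recent_comments"] },
  PageB: { limit: 10, include: [] },
  PageC: { limit: 10, include: ["comments"] },
};

function parsePostQuery(url: string): PostQuery {
  const params = new URL(url).searchParams;
  const preset = params.get("transformation");
  if (preset && transformations[preset]) return transformations[preset];
  // Fallback: explicit includes -- match custom includes first, then relations.
  const include = (params.get("include") ?? "").split(",").filter(Boolean);
  return { limit: Number(params.get("limit") ?? 10), include };
}

// parsePostQuery("https://bar.com/api/v1/posts?transformation=PageA")
// -> { limit: 10, include: ["recent_comments"] }
```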
How do Zapier/IFTTT implement the triggers and actions for different API providers? Is there a generic approach, or are they implemented individually?
I think the implementation is based on REST/OAuth, which is generic when viewed from a high level. But Zapier/IFTTT define a lot of trigger conditions and filters, and these should be specific to each provider. Is the corresponding implementation done individually or generically? If individually, it must require a vast amount of labor. If generically, how is it done?
Zapier developer here - the short answer is, we implement each one!
While standards like OAuth make it easier to reuse some of the code from one API to another, there is no getting around the fact that each API has unique endpoints and unique requirements. What works for one API will not necessarily work for another. Internally, we have abstracted away as much of the process as we can into reusable bits, but there is always some work involved to add a new API.
PipeThru developer here...
There are common elements to each API which can be re-used, such as OAuth authentication, common data formats (JSON, XML, etc). Most APIs strive for a RESTful implementation. However, theory meets reality and most APIs are all over the place.
Each service offers its own endpoints, and there is no commonly agreed-upon set of endpoints that is correct for a given kind of service. For example, within CRM software, it's not clear how a person, notes on said person, corresponding phone numbers, addresses, and activities should be represented. Do you provide one endpoint or several? How do you update each? Do you provide tangential records (like the company for the person) with the record or not? Each requires specific knowledge of that service as well as some data normalization.
Most of the triggers involve checking for a new record (unique id) or an updated field, usually the last-update timestamp. Most services present their timestamps in ISO 8601 format, which makes parsing timestamps easy, but not all of them do. Dropbox actually provides a delta API endpoint to which you can present a hash value, and Dropbox will send you everything new/changed from that point. I'd love to see delta and/or activity endpoints in more APIs.
Bottom line, integrating each individual service does require a good amount of effort and testing.
I will point out that Zapier did implement an API for other companies to plug into their tool. Instead of Zapier implementing your API and polling you for data, you can send new/updated data to Zapier to trigger one of their Zaps. I like to think of this as webhooks on crack. This allows Zapier to support many more services without having to program each one.
I've implemented a few APIs on Zapier, so I think I can provide at least a partial answer here. If not using webhooks, Zapier will examine the API response from a service for the field with the shortest name that also includes the string "id". Changes to this field cause Zapier to trigger a task. This is based on the assumption that an id is usually incremental or random.
I've had to work around this by shifting the id value to another field and writing different values to id when it was failing to trigger, or triggering too frequently (dividing by 10 and then writing id can reduce the trigger sensitivity, for example). Ambiguity is also a problem, for example in an API response that contains fields like post_id and mesg_id.
The short answer is that the system makes an educated guess, but to get it working reliably for a specific service, you should be quite specific in your code about what constitutes a trigger event.
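Conceptually, the polling side of such a trigger boils down to something like this (a sketch; the endpoint and record shape are hypothetical):

```typescript
// Hypothetical polling endpoint and record shape, for illustration only:
// remember which ids have been seen and fire only for new records.
const seen = new Set<string>();

async function pollForNewRecords(url: string): Promise<{ id: string }[]> {
  const res = await fetch(url);
  const records: { id: string }[] = await res.json();
  const fresh = records.filter(r => !seen.has(r.id));
  fresh.forEach(r => seen.add(r.id));
  return fresh; // each of these would trigger a task
}
```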
I have a RESTful web service that runs on the Google App Engine, and uses JPA to store entities in the GAE Data Store.
New entities are created using a POST request (as the server will generate the entity ID).
However, I am uncertain as to the best status code to return, as the GAE DS is eventual consistent. I have considered the following:
200 OK: the RFC states that the response body should contain "an entity describing or containing the result of the action". This is achievable, as the entity is updated with its generated ID when it is persisted to the DS, so it is possible to serialize and return the updated entity straight away. However, subsequent GET requests for that entity by ID may fail, as all nodes may not yet have reached consistency (this has been observed as a real-world problem for my client application).
201 Created: As above, returning a URI for the new entity may cause the client problems if consistency has not yet been reached.
202 Accepted: Would eliminate the problems discussed above, but would not be able to inform the client of the ID of the new entity.
What would be considered best practice in this scenario?
A get by key will always be consistent, so a 200 response would be OK based on your criteria, unless there is a problem in Google land. Are you certain the problems you observed came from gets rather than queries? There is a difference between a query selecting a KEY and a GET by key.
For a query to be consistent it must be an ancestor query; a GET, by contrast, is always consistent. Anything else may see inconsistent data, as indexes have yet to be updated.
This is all assuming there isn't an actual problem in Google land. We have seen problems in the past, where data centers were late replicating and eventual consistency was very late, sometimes even by hours.
But you have no way of knowing that, so you either have to assume all is OK, or take an extremely pessimistic approach.
It depends on which JSON REST protocol you are using. Just always returning a JSON object is not very RESTful.
You should look at some of these:
jsonapi.org/format/
http://stateless.co/hal_specification.html
http://amundsen.com/media-types/collection/
To answer your question:
I would prefer a format where the resource itself is aware of its URL, so I would use 201 but also return the whole resource.
The easiest way would be to use JSON:API with a convenient URL schema, so that you are able to find a resource by URL because you know the ID.
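For illustration, the client side of such a 201 response could look something like this (URL and payload shape are made up):

```typescript
// Client side of a 201 Created that carries both a Location header and the
// full resource. URL and payload shape are made up for illustration.
async function createWidget(data: { name: string }) {
  const res = await fetch("https://example.com/api/widgets", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(data),
  });
  if (res.status !== 201) throw new Error(`unexpected status ${res.status}`);
  const url = res.headers.get("Location"); // canonical URL of the new resource
  const resource = await res.json();       // full representation, id included
  return { url, resource };
}
```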