With a "traditional" web framework, one could use e.g. AbstractRequestLoggingFilter to implement a generic logging filter. With web-reactive that filter isn't called anymore (which makes sense, since it operates on HttpServletRequest).
Can anyone point me in the right direction for implementing a request filter with web-reactive that logs the HTTP request, including its body, before and after the request, like AbstractRequestLoggingFilter does?
You can implement a WebFilter and declare it as a bean; it will be picked up automatically.
Note that the WebFilter contract is based on ServerWebExchange, which holds a ServerHttpRequest. The body is not accessible directly as byte[], but rather as a Flux<DataBuffer>; this is not meant to be buffered in memory or consumed by the filter, so logging the whole request body is more complex than in MVC scenarios. Also, you should avoid blocking operations during request processing.
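For reference, a minimal sketch of such a filter (assuming Spring WebFlux with SLF4J available): it logs the request line and headers before the exchange and the status code after it completes, deliberately without touching the body.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import reactor.core.publisher.Mono;

// Minimal sketch: logs the request line and headers before the exchange and
// the status code after it completes. The body (Flux<DataBuffer>) is not
// consumed here, for the reasons described above.
@Component
public class RequestLoggingWebFilter implements WebFilter {

    private static final Logger log = LoggerFactory.getLogger(RequestLoggingWebFilter.class);

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
        log.info("Request: {} {} headers={}",
                exchange.getRequest().getMethod(),
                exchange.getRequest().getURI(),
                exchange.getRequest().getHeaders());

        return chain.filter(exchange)
                // runs after downstream processing has finished
                .doFinally(signal -> log.info("Response: status={}",
                        exchange.getResponse().getStatusCode()));
    }
}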
Related
HTTP POST is normally not idempotent, so executing a failed request again might cause multiple inserts. What do you think about using the user's session ID as the UUID v5 "namespace" and the JSON payload as the "name"? It would result in the same ID for multiple requests, and the database would reject an additional insert.
There are APIs that specifically mark HTTP methods that are otherwise non-idempotent as idempotent.
POST being non-idempotent by default does not mean it isn't allowed to be idempotent; it just means that generic clients can't assume it is.
The best implementation I've seen is the Stripe API, which uses an Idempotency-Key HTTP header. The client defines this, and if two requests are received with an identical key, Stripe knows how to handle the second. I think this is the best approach, and better than the idea of trying to construct a hash based on the request. Two requests looking identical does not mean the effect is the same; consider for example this POST request:
POST /increment
Content-Type: application/json
{ "increment-by": 2 }
If I send this request twice, I expect the counter to be increased to 4, even though the request body was the same each time.
The Idempotency-Key lets the client control and inform the server whether two requests were actually the same.
https://stripe.com/blog/idempotency
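A rough sketch of how that could look as a generic piece of server-side middleware, here as a plain servlet filter; the in-memory key store and the 409 response for duplicates are simplifications for illustration (Stripe actually replays the original response):

import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.annotation.WebFilter;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Rough sketch of the Idempotency-Key idea as a servlet filter. The "seen
// keys" store is an in-memory map purely for illustration; a real service
// would persist it (see the transaction discussion below).
@WebFilter("/*")
public class IdempotencyKeyFilter implements Filter {

    private final Map<String, Boolean> seenKeys = new ConcurrentHashMap<>();

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse resp = (HttpServletResponse) response;
        String key = req.getHeader("Idempotency-Key");

        // No key supplied, or not a POST: behave like a normal request.
        if (key == null || !"POST".equals(req.getMethod())) {
            chain.doFilter(request, response);
            return;
        }

        if (seenKeys.containsKey(key)) {
            resp.setStatus(HttpServletResponse.SC_CONFLICT);   // duplicate request
            return;
        }

        chain.doFilter(request, response);

        // only remember the key if the request actually succeeded
        if (resp.getStatus() >= 200 && resp.getStatus() < 300) {
            seenKeys.put(key, Boolean.TRUE);
        }
    }
}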
Followups:
Do I store the Idempotency-Key as a separate column on the record?
I would be inclined to implement this feature globally as some kind of middleware.
Storing the Idempotency-Key in something like Redis carries the risk of the two stores diverging (e.g. the server creates the DB record and crashes before writing to Redis).
Use a transaction.
All you have to store about the key is that you've seen it before, and you only have to store it if the request was successful.
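As a sketch of that idea (table and column names are made up): the key and the business write share one transaction, so either both are persisted or neither is, and a unique constraint on the key column makes a retried request fail cleanly:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// Hypothetical sketch: the idempotency key and the business write share one
// transaction. A unique constraint on idempotency_key.key_value makes a retry
// with the same key fail the INSERT, which the caller can treat as "seen before".
public class IdempotentOrderService {

    private final DataSource dataSource;

    public IdempotentOrderService(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public boolean createOrder(String idempotencyKey, String payload) throws SQLException {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);
            try (PreparedStatement keyInsert = con.prepareStatement(
                         "INSERT INTO idempotency_key (key_value) VALUES (?)");
                 PreparedStatement orderInsert = con.prepareStatement(
                         "INSERT INTO orders (payload) VALUES (?)")) {

                keyInsert.setString(1, idempotencyKey);
                keyInsert.executeUpdate();          // fails on a duplicate key

                orderInsert.setString(1, payload);
                orderInsert.executeUpdate();

                con.commit();                       // key + order stored atomically
                return true;
            } catch (SQLException e) {
                con.rollback();                     // nothing persisted on failure
                // a real service would distinguish a unique-constraint violation
                // ("seen before") from other database errors
                return false;
            }
        }
    }
}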
So, in an ideal world, both client-side and server-side validation can be defined in one place, so the validation only has to be written once and can be reused wherever needed.
The idea I have to solve this is to do all the validation through an API using ASP.NET Core. When the form data on the client changes, it sends an AJAX request with the updated data model, which the API validates, returning any errors. The client then shows these errors to the user directly.
This way it still looks like the good old client-side validation, but it actually all happens on the server.
I can already imagine the server load is going to increase since a lot more API calls will be sent; however, the question is:
Will this server load be manageable in, for example, a big enterprise application with huge forms and complex validation?
And are there any other big drawbacks of this solution which I have to watch out for?
You are talking about an API, not any other type of application with a back-end.
In this world, yes, validation of the payloads is important and needs to happen on the API side. In a way, validation is the easiest and least resource-consuming part, since it's the first thing you check, and if it doesn't pass, the API returns a 400 Bad Request HTTP code and nothing else happens.
There are systems where the validation, especially business rules validation does not happen on the API side. You could have for example a financial platform and the API is simply the gateway inside that world. In this case, the API acts as a pass-through and doesn't do much itself.
That being said, everything is susceptible to too much traffic, but you should be able to either throw enough resources at it or deploy it in the cloud and let it scale based on demand. You can load test APIs as well to see how well they do under pressure; you should have an idea of how many calls to expect in a certain period of time.
I wouldn't worry too much about it. I'd say validate what you can client-side, so you don't even hit the API if there is no need for it, and leave the rest to the API.
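To make the flow concrete, here is a minimal sketch of such a validation endpoint. The question is about ASP.NET Core, but the shape is the same in any stack, so this illustration uses Java/Spring with made-up field names: invalid payloads come back as 400 with a field-to-message map the client can display.

import jakarta.validation.Valid;
import jakarta.validation.constraints.Email;
import jakarta.validation.constraints.NotBlank;
import java.util.Map;
import java.util.stream.Collectors;
import org.springframework.http.ResponseEntity;
import org.springframework.validation.FieldError;
import org.springframework.web.bind.MethodArgumentNotValidException;
import org.springframework.web.bind.annotation.*;

// Illustrative sketch: a dedicated validation endpoint the form can call on
// every change. Validation failures are returned as a 400 with field errors.
@RestController
public class ValidationController {

    record CustomerForm(@NotBlank String name, @Email String email) {}   // hypothetical form model

    @PostMapping("/api/customers/validate")
    public ResponseEntity<Void> validate(@Valid @RequestBody CustomerForm form) {
        return ResponseEntity.ok().build();   // reached only if validation passed
    }

    @ExceptionHandler(MethodArgumentNotValidException.class)
    public ResponseEntity<Map<String, String>> onValidationError(MethodArgumentNotValidException ex) {
        Map<String, String> errors = ex.getBindingResult().getFieldErrors().stream()
                .collect(Collectors.toMap(FieldError::getField,
                        FieldError::getDefaultMessage, (a, b) -> a));
        return ResponseEntity.badRequest().body(errors);   // 400 + errors for the client to display
    }
}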
I have a page with multiple widgets, each receiving data from a different query in the backend. Doing a request for each will exhaust the limit the browser puts on the number of parallel connections and will serialize some of them. On the other hand, doing one request that returns one response means it will be as slow as the slowest query (I have no a priori knowledge about which query will be slowest).
So I want to create one request such that the backend runs the queries in parallel and writes each result as it is ready, and the frontend handles each result as it arrives. At the HTTP level I believe it can be just one body with several JSON documents, or maybe a multipart response.
Is there an AngularJS extension that handles the frontend side of things? Ideally something that works well with whatever can be done in the Java backend (I haven't started investigating my options there).
I have another suggestion to solve your problem, but I am not sure you would be able to implement such a thing, as from your question it is not very clear what you can or cannot do.
You could implement WebSockets, and the server would then be able either to notify the front-end that the data has been fetched or to send the data over WebSockets right away.
In the first approach, you would send a request to the server to fetch all the data for your dashboard. Once a piece of data is available, you could make a request for that particular piece, and given that the data was fetched a couple of seconds ago, it could be cached on the server and the response would be fast.
The second approach seems a more reasonable one. You would make an HTTP/WebSocket request to the server and wait for the data to arrive over WebSocket.
I believe this would be the most robust and efficient way to implement what you are asking for.
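A minimal sketch of the second approach on the Java side, using the standard Jakarta WebSocket API (widget names and queries are made up): each query runs in parallel and its result is pushed the moment it is ready.

import jakarta.websocket.OnOpen;
import jakarta.websocket.Session;
import jakarta.websocket.server.ServerEndpoint;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch: the client opens a WebSocket, the server kicks off one
// task per widget query and pushes each result as soon as it is ready, instead
// of waiting for the slowest query. A real implementation would also coordinate
// concurrent sends and handle errors/closing.
@ServerEndpoint("/dashboard")
public class DashboardEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        for (String widget : List.of("sales", "traffic", "errors")) {    // assumed widget ids
            CompletableFuture
                    .supplyAsync(() -> runQuery(widget))                 // queries run in parallel
                    .thenAccept(json -> session.getAsyncRemote().sendText(json));
        }
    }

    private String runQuery(String widget) {
        // placeholder for the real (potentially slow) backend query
        return "{\"widget\":\"" + widget + "\",\"data\":[]}";
    }
}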
https://github.com/dfltr/jQuery-MXHR
This plugin allows parsing a response that contains several parts (multipart) by providing a callback for each part. This can be used in all our frontends to support responses for multiple pieces of data (widgets) in one request. The server side will receive one request and use servlet 3 async support (or whatever exists in other languages) to 'park' it, issuing multiple queries and writing each response to the request as each query returns (with the right multipart boundary).
Another example can be found here: https://github.com/anentropic/stream.
While neither of these may be compatible with AngularJS out of the box, the code does not seem complex to port.
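For the server-side part described above, here is a rough servlet 3 async sketch (boundary string, widget names, and queries are made up): the request is parked, the queries run in parallel, and each result is flushed as its own multipart part as soon as it is ready.

import jakarta.servlet.AsyncContext;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Sketch: one request is "parked" with servlet 3 async support, the widget
// queries run in parallel, and each result is written as its own multipart
// part the moment it is ready.
@WebServlet(urlPatterns = "/widgets", asyncSupported = true)
public class WidgetStreamServlet extends HttpServlet {

    private static final String BOUNDARY = "widget-boundary";   // arbitrary boundary string

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        resp.setContentType("multipart/mixed; boundary=" + BOUNDARY);
        AsyncContext ctx = req.startAsync();                     // park the request

        List<CompletableFuture<Void>> parts = List.of("sales", "traffic", "errors").stream()
                .map(widget -> CompletableFuture
                        .supplyAsync(() -> runQuery(widget))     // queries run in parallel
                        .thenAccept(json -> writePart(ctx, json)))
                .toList();

        // close the multipart body and complete once every part has been written
        CompletableFuture.allOf(parts.toArray(CompletableFuture[]::new))
                .whenComplete((ok, err) -> {
                    try {
                        ctx.getResponse().getWriter().print("--" + BOUNDARY + "--\r\n");
                    } catch (IOException ignored) {
                    }
                    ctx.complete();
                });
    }

    private synchronized void writePart(AsyncContext ctx, String json) {
        try {
            PrintWriter out = ctx.getResponse().getWriter();
            out.print("--" + BOUNDARY + "\r\nContent-Type: application/json\r\n\r\n" + json + "\r\n");
            out.flush();                                         // push the part immediately
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    private String runQuery(String widget) {
        // placeholder for the real backend query
        return "{\"widget\":\"" + widget + "\",\"data\":[]}";
    }
}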
Scenario:
I have a Node and Angular web app.
It needs to call an external api (a third party service) for data (more specifically this: https://api.represent.me/api/questions/).
Question:
Is it better to make this external call from the Angular frontend (GET http://thirdpartyservice.com/api/data), or have the frontend call a same-domain Node endpoint (GET http://example.com/node-backend-api) which then calls GET http://thirdpartyservice.com/api/data, fetching and processing the data from the third-party API before passing it back to Angular?
Thoughts:
I guess two API calls is less desirable, but the second one is on the same domain, so would this not really be an issue?
GETting from the Node side would be more secure (especially if secret keys were used), and would also mask the fact that a third-party service is used.
CORS issues might get in the way if calling from the frontend.
Is context key here? E.g. calling font APIs from the frontend is probably best, but fetching data that needs processing is probably better done from the backend.
What do others recommend (and do), and are there any other points for or against to add to the 'thoughts'?
It depends on what your 3rd party API requires.
If you need some credentials to call the API, it's probably better to handle the call in the backend because of security concerns.
If the API delivers time sensitive data, like some auto-complete information as you type, it might be good to not do the extra roundtrip to the backend and call it from the frontend.
You might create a subdomain which points to the 3rd-party server, like 3rdparty-api.yourdomain.com; this removes a lot of cross-domain issues, but it needs the cooperation of your 3rd-party provider.
So, there is no clear yes or no answer but it depends on the situation and focus of your API.
Your solution looks fine; the only thing that may get in your way is if the 3rd-party API you are using provides any sort of analytics. If you call it from Node, you will overwrite the User-Agent and IP information that would be gathered if you called it from the UI. Other than that, I believe making the request directly from the UI could reduce the load on the server a little, but I don't know if that matters to you.
I would say we should also take code duplication into account. In your case everything is JavaScript, but that is not true for many others. Say I consume api.github.com: I won't want to make some calls from the frontend and some from the backend, so I think creating a controller which handles all of this is a good choice.
Except for cases like analytics or tracking software, an extra round trip is OK.
As #Wolffc said, this can also prevent sending the access_token to the browser, where it may be misused.
I made a POST request to a Sinatra app. I noticed that the parameters arrive at the server as a StringIO. It can be read using request.body.read. However, it can only be read once; to read it again, I need to run request.body.rewind (haha, Sinatra).
Why is it designed this way? I can see this being useful in streaming data but are there other applications?
Parameters are available within Sinatra via the params hash. request.body.read and request.body.rewind are part of Rack; they are not actually implemented within Sinatra. The most common way I have used this in the past is when I'm using Sinatra strictly as a web API and serializing/de-serializing my payload.