External API calls from the frontend or backend? - angularjs

Scenario:
I have a Node and Angular web app.
It needs to call an external api (a third party service) for data (more specifically this: https://api.represent.me/api/questions/).
Question:
Is it better to make this external call directly from the Angular frontend (GET http://thirdpartyservice.com/api/data), or to have the frontend call a same-domain Node endpoint (GET http://example.com/node-backend-api) which in turn calls GET http://thirdpartyservice.com/api/data, processes the data from the third-party API, and passes it back to Angular?
Thoughts:
I guess two API calls is less desirable, but since the second call is on the same domain, would this not really be an issue?
GETting from the Node side would be more secure (especially if secret keys were used), and would also mask the fact that a third-party service is used.
CORS stuff might get in the way if calling from the frontend.
Is context key here? E.g. calling font APIs from the frontend is probably best, but fetching data that needs processing is probably better done from the backend.
What do others recommend (and do), and are there any other points for or against to add to the 'thoughts' above?

It depends on what your 3rd party API requires.
If you need credentials to call the API, it's probably better to handle the call in the backend because of security concerns.
If the API delivers time sensitive data, like some auto-complete information as you type, it might be good to not do the extra roundtrip to the backend and call it from the frontend.
You might create a subdomain which points to the 3rd party server, like 3rdparty-api.yourdomain.com, which removes a lot of cross-domain issues. But this needs the cooperation of your 3rd party provider.
So, there is no clear yes or no answer but it depends on the situation and focus of your API.
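A minimal sketch of the backend-proxy option, assuming an Express server and Node 18+ (for the built-in fetch); the Authorization header and environment variable are illustrative only, since the third-party API may use a different auth scheme or none at all:

    const express = require('express');
    const app = express();

    // The secret key never leaves the server; the browser only sees /node-backend-api
    const THIRD_PARTY_KEY = process.env.THIRD_PARTY_KEY;

    app.get('/node-backend-api', async (req, res) => {
      try {
        const response = await fetch('https://api.represent.me/api/questions/', {
          headers: { Authorization: `Bearer ${THIRD_PARTY_KEY}` } // illustrative only
        });
        const data = await response.json();
        // trim/reshape the third-party payload here before returning it to Angular
        res.json(data);
      } catch (err) {
        res.status(502).json({ error: 'Upstream API call failed' });
      }
    });

    app.listen(3000);

The key stays server-side and the frontend only ever talks to your own domain, which also sidesteps CORS.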

Your solution looks fine; the only thing that may get in your way is if the 3rd party API you are using provides any sort of analytics. If you call it from Node you will overwrite the user agent and IP information that would be gathered if you called it from the UI. Other than that, I believe making the request directly from the UI could reduce the load on the server a little, but I don't know if that matters to you.

I would say we should also consider code duplication. In your case everything is JavaScript, but that is not true for many others. So let's say I consume api.github.com: I won't want to make some calls from the frontend and some from the backend, so creating a single controller which handles all of this is a good choice.
Except for cases like analytics or tracking software, an extra round trip is OK.
As #Wolffc said, this also prevents sending the access_token to the browser, where it may be misused.

Related

How to prevent use of my API which provides data for my React app

I'm making a web service and I'm going to use React. The data for the service will be fetched from my API.
But there is a simple way to find out which endpoints I'm using and what data I'm sending. This knowledge gives a lot of options for building bots against my service.
Is there any option to prevent this?
I know I can require signing of all requests, but that's also easy to figure out.
This cannot be done. Whatever is done in client-side JavaScript, can be reverse-engineered and simulated.
Efforts should be focused on preventing the API from being abused, i.e. throttling or blacklisting clients based on their activity or available information (user agent, suspicious requests, generated traffic). If the use of the API allows a captcha, suspicious clients can be asked to prove they are human.
There are half-measures that can be applied to client side application and make it less advantageous for abuse (and also for development).
Prevent unauthorized access to unminified/unobfuscated JS AND source maps. There may be a need to authorize them on a per-user basis. This will make debugging and bug reporting more difficult.
Hard-code parts of the request signing so that they depend on browser APIs, e.g.:
apiKey = hash(NOT_SO_SECRET_KEY + document.querySelector('.varyingBlock').innerHTML)
This requires bots to emulate a browser environment and makes their work much less efficient. It also affects the design of the application in a negative way. Obviously, there will be additional difficulties with SSR, and it won't translate to native platforms easily.
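As a rough sketch of that idea in the browser, using the Web Crypto API (NOT_SO_SECRET_KEY and the .varyingBlock selector are placeholder names from the example above, not real values):

    // Tie the request signature to the live DOM so plain HTTP bots can't compute it
    async function computeApiKey() {
      const NOT_SO_SECRET_KEY = 'not-so-secret';            // placeholder baked into the bundle
      const domPart = document.querySelector('.varyingBlock').innerHTML;
      const bytes = new TextEncoder().encode(NOT_SO_SECRET_KEY + domPart);
      const digest = await crypto.subtle.digest('SHA-256', bytes);
      return Array.from(new Uint8Array(digest))
        .map(b => b.toString(16).padStart(2, '0'))
        .join('');                                          // hex-encoded hash
    }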
Here are two basic preventive measures that you can use.
Captcha
Use a captcha service like reCAPTCHA, so that a user can use your website only after passing the captcha test. It is highly difficult for bots to pass captchas.
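As a hedged sketch, the server-side half of that check could look like this in Node (assuming a global fetch, i.e. Node 18+, and a RECAPTCHA_SECRET environment variable):

    // Verify the reCAPTCHA token the client submitted along with the request
    async function verifyCaptcha(token) {
      const params = new URLSearchParams({
        secret: process.env.RECAPTCHA_SECRET,
        response: token
      });
      const res = await fetch('https://www.google.com/recaptcha/api/siteverify', {
        method: 'POST',
        body: params
      });
      const result = await res.json();
      return result.success === true;
    }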
Rate-limit API usage.
Add rate limiting to your API so that a logged-in user can only make, say, 100 requests in 10 minutes; the exact numbers will depend on your use case.
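A minimal sketch of such a limit for an Express backend, assuming the express-rate-limit package (package choice and numbers are assumptions):

    const rateLimit = require('express-rate-limit');

    // At most 100 requests per client per 10 minutes; tune to your use case
    const apiLimiter = rateLimit({
      windowMs: 10 * 60 * 1000,
      max: 100,
      standardHeaders: true
    });

    app.use('/api/', apiLimiter);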

Is implementing client-side validation through an API realistic when it comes to performance?

So, in an ideal world both client-side validation and server-side validation can be defined in one place, so the validation only has to be written once and can be reused wherever needed.
The idea I have to solve this is to do all the validation through an API using ASP.NET Core. When the form data on the client changes, it sends an AJAX request with the updated data model, which the API validates, in turn returning any errors. The client then shows these errors to the user directly.
This way it still looks like the good old client-side validation, but it actually all happens on the server.
I can already imagine the server load is going to increase since a lot more API calls will be sent; however, the question is:
will this server load be manageable in, for example, a big enterprise application with huge forms and complex validation?
And are there any other big drawbacks of this solution which I have to watch out for?
You are talking about an API, not any other type of application with a back-end.
In this world, yes, the validation of the payloads is important and needs to happen on the API side. In a way, the validation is the easiest and least resource-consuming part, since it is the first thing you check: if it doesn't pass, the API returns a 400 Bad Request HTTP code and nothing else happens.
There are systems where the validation, especially business-rules validation, does not happen on the API side. You could have, for example, a financial platform where the API is simply the gateway into that world. In this case, the API acts as a pass-through and doesn't do much itself.
This being said, everything is susceptible to too much traffic, but you should be able to either throw enough resources at it, or have it deployed in the cloud and let it scale based on demand. You can load test APIs as well, to see how well they do under pressure; you should have an idea of how many calls to expect in a certain period of time.
I wouldn't worry too much about it. I'd say validate what you can client-side, so you don't even hit the API if there's no need for it, and leave the rest to the API.
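For illustration, a rough sketch of the client side of that approach: debounce the change events and only then hit the validation endpoint (/api/validate and showValidationErrors are hypothetical names):

    let debounceTimer;

    function onFormChange(model) {
      clearTimeout(debounceTimer);
      // wait until the user pauses typing before calling the validation API
      debounceTimer = setTimeout(async () => {
        const response = await fetch('/api/validate', {     // hypothetical endpoint
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(model)
        });
        const errors = await response.json();               // e.g. { email: 'Invalid address' }
        showValidationErrors(errors);                        // your own UI code
      }, 300);
    }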

Serve REST-API data in web-page without exposing API-endpoint

I am a beginner in the MEAN stack.
When invoking an unauthenticated REST API (no user log-in required), the API endpoints are exposed in the JS files. I've read in forums that there is no way to prevent abusers from using the API endpoints directly, or even building their own web/app on top of those endpoints. So my question is: is there any way to avoid exposing the endpoints in the JS files?
On a similar note, is there any way to avoid REST calls on the front-end and still serve that dynamic content/API output data in a MEAN stack? I use EJS.
There's no truly secure way to do this. You can render the page on the server instead, so the client only sees HTML (and some limited JS).
First, if you don't enable CORS, your AJAX calls are protected by the browser, i.e. only pages served from domain A can make AJAX calls to domain A.
Public API providers like Google Maps protect themselves by making you use an API key that they link to a Google account.
That key is still visible in the JS, but - if abused - can be easily disabled.
There's also pseudo-security through obfuscation, i.e. making it harder for an attacker to extract a common secret you are using to encrypt the API interaction.
Tools like http://javascript2img.com/ are far from perfect, but they make attackers spend a lot of time trying to figure out what your code does.
Often that is enough.
There are also various ways to download JS code on demand which can then check if the DOM it runs in is yours.
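If you do need cross-origin access, but only from your own pages, a minimal sketch with the cors package for Express (the allowed origin is a placeholder):

    const cors = require('cors');

    // Browsers will only allow pages served from this origin to read API responses;
    // note this protects browser users, not direct server-to-server callers.
    app.use(cors({ origin: 'https://www.example.com' }));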

Securing a single page application built on react/react-router

I've been putting together a single-page application using React and React-Router and I can't seem to understand how these applications can be secured.
I found a nice clear blog post which shows one approach, but it doesn't look very secure to me. Basically, the approach presented in that post is to restrict rendering of components which the user is not authorized to access. The author wrote a couple more posts which are variations on the idea, extending it to React-Router routes and other components, but at their hearts all these approaches seem to rely on the same flawed idea: the client-side code decides what to do based on data in the store at the time the components are composed. And that seems like a problem to me - what's to stop an enterprising hacker from messing around with the code to get access to stuff?
I've thought of three different approaches, none of which I'm very happy with:
I could certainly write my authorization code in such a way that the client-side code is constantly checking with the server for authorization, but that seems wasteful.
I could set the application up so that modules are pushed to the client from the server only after the server has verified that the client has authority to access that code. But that seems to involve breaking my code up into a million little modules instead of a nice, monolithic bundle (I'm using browserify).
Some system of server-side rendering might work, which would ensure that the user could only see pages for which the server has decided they have authority to see. But that seems complicated and also seems like a step backward (I could just write a traditional web app if I wanted the server to do everything).
So, what is the best approach? How have other people solved this problem?
If you’re trying to protect the code itself, it seems that any approach that either sends that code to the client, or sends the code able to load that code, would be a problem. Therefore even traditional simple approaches with code splitting might be problematic here, as they reveal the filename for the bundle. You could protect it by requiring a cookie on the server, but this seems like a lot of fuss.
If hiding the internal code from unauthorized users is a requirement for your application, I would recommend splitting it into two separate apps with separate bundles. Going from one to another would require a separate request but this seems to be consistent with what you want to accomplish.
Great question. I'm not aware of any absolute best practices floating around out there that seem to outstrip others, so I'll just provide a few tips/thoughts here:
a remote API should handle the actual auth, of course.
sessions need to be shared, so a store like Redis is usually a good idea, especially for fast reads.
if you're doing server-side rendering that involves hydration, you'll need a way to share the session state between server and client. See the link below for one way to do universal React.
on the client, you could send down a session cookie or JWT, read it into memory (maybe using Redux and keeping it in your state tree?) and maybe use middleware (à la Redux?) to set it as a header on requests.
on the client, you could also rely on localStorage to save the cookie/JWT
maybe split the code into two bundles, one for auth, one for the actual app logic?
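As a rough illustration of the token-as-header tips above (the in-memory storage and helper names are assumptions, not a prescribed pattern):

    // Keep the token in memory (e.g. your Redux state tree) and attach it to every API call
    let authToken = null;                    // set after a successful login

    function setAuthToken(token) {
      authToken = token;
    }

    function apiFetch(url, options = {}) {
      const headers = { ...(options.headers || {}) };
      if (authToken) {
        headers.Authorization = `Bearer ${authToken}`;
      }
      return fetch(url, { ...options, headers });
    }

    // usage: apiFetch('/api/profile').then(res => res.json())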
See also:
https://github.com/erikras/react-redux-universal-hot-example for hydration example
https://github.com/erikras/react-redux-universal-hot-example/issues/608
As long as the store does not contain data that the user is not authorized to have, there shouldn't be too much of a problem even if a hacker checks the source and sees modules/links that he shouldn't have access to.
The state inside the store as well as critical logic would come from services and those need to be secured, whether it's an SPA or not; but especially on an SPA.
Also: server-side rendering with Redux isn't too complex. You can read about it here:
http://redux.js.org/docs/recipes/ServerRendering.html
It's basically only used to serve a root HTML page with a predefined state. This further increases security and loading speed, but does not defeat the idea behind SPAs.
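A compressed sketch of that recipe, assuming JSX transpilation on the server, an Express app, and hypothetical App, rootReducer and loadStateForUser modules:

    import { renderToString } from 'react-dom/server';
    import { createStore } from 'redux';
    import { Provider } from 'react-redux';
    import App from './App';                  // your root component (assumption)
    import rootReducer from './reducers';     // your root reducer (assumption)

    app.get('*', (req, res) => {
      // only put data the user is actually authorized to see into the preloaded state
      const store = createStore(rootReducer, loadStateForUser(req.user)); // hypothetical loader
      const html = renderToString(<Provider store={store}><App /></Provider>);
      const state = JSON.stringify(store.getState()).replace(/</g, '\\u003c'); // avoid XSS
      res.send(`<!doctype html><html><body>
        <div id="root">${html}</div>
        <script>window.__PRELOADED_STATE__ = ${state}</script>
        <script src="/bundle.js"></script>
      </body></html>`);
    });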

Downsides of using front-end frameworks for API handling? Performance and security

So, while practising all the new tech (Angular 2, AngularJS, Firebase, LoopBack, Node.js, etc.), I'm kind of confused about some topics that people don't really talk about. It might go into too much detail, but I'll try to split it up as much as I can.
Part 1 -- Performance
I like the approach of: MyApp (Web, Mobile, ..) --> API <-- Database
Okay, we perform API requests to the same server over HTTP, which is slower, but for small projects this should be a non-issue, right? Or are there any other solutions for this matter?
I know they often just do MyApp --> Framework <-- Database and add an API interface which calls the correct actions to get the necessary logic/data out to, e.g., a mobile app.
-- End Part 1
Part 2 -- Security
So, assume we have an API up and running, either with Lumen, LoopBack or anything else, or a realtime Firebase database (not really an API). Then we can connect to it with HTTP requests from Angular, jQuery... If a user inspects our source code, they can easily see how we handle data in the backend. How can this be secured so that only the necessary applications have control over the API (OAuth2?), and also so that we limit users' insight into our API?
-- End Part 2
Thanks.
Okay, I thought it was a "too broad" question, but actually it has a short answer.
Performance
Irrelevant. If you gotta fetch data, you gotta fetch data, be it an API call or some custom action in your Laravel code (or something). Same HTTP stuff.
Security
... where a user can check the source code of the API calls.
Security through obscurity doesn't work. Always consider that your client is compromised. Use proper authentication/authorization methods (OAuth and the like). Then, even if a malicious user knows your API endpoint signatures (which he will), or whatever else you were trying to hide, he can't abuse them.
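For example, a minimal sketch of bearer-token checking in an Express backend with the jsonwebtoken package (middleware name and secret handling are assumptions):

    const jwt = require('jsonwebtoken');

    // Every request must carry a valid token; knowing the endpoint URLs alone is useless without one
    function requireAuth(req, res, next) {
      const header = req.headers.authorization || '';
      const token = header.replace(/^Bearer /, '');
      try {
        req.user = jwt.verify(token, process.env.JWT_SECRET);
        next();
      } catch (err) {
        res.status(401).json({ error: 'Unauthorized' });
      }
    }

    app.get('/api/data', requireAuth, (req, res) => {
      res.json({ data: 'only for authenticated clients' });
    });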
