Should I use firebase cloud functions for every http request? - reactjs

Writing a React Native App that allows users to Register, Login, Update Account Info, Post (Each post contains a short message with 1500 chars or less and/or up to 9 images) and Download Posts. I can do all these tasks without using cloud functions, but I wonder which approach is better and why?
For example, to Set user's account info, I could do something like this in my app:
firebase.database().ref(`users/${uid}`)
  .set({
    firstName: 'Stack',
    lastName: 'Overflow'
  });
Or I could simply write a firebase cloud function and every time I want to set a user's account info, I could do something like this:
const SET_ACCOUNT_URL = 'https://firebase.set_account_url.com';

axios.post(SET_ACCOUNT_URL, {
  firstName: 'Stack',
  lastName: 'Overflow'
})
  .then(() => { /* ...Do Something Here... */ })
  .catch((error) => console.log(error));
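On the Functions side, the endpoint behind that URL would be something along these lines (a simplified sketch; the URL, function name, and payload fields are placeholders, and a real endpoint would also verify the caller's ID token):
// functions/index.js -- hypothetical HTTPS endpoint behind SET_ACCOUNT_URL
const functions = require('firebase-functions');
const admin = require('firebase-admin');

admin.initializeApp();

exports.setAccount = functions.https.onRequest(async (req, res) => {
  try {
    // In a real app you would verify the Firebase ID token sent by the client
    // (admin.auth().verifyIdToken(...)) and derive the uid from it.
    const { uid, firstName, lastName } = req.body;
    await admin.database().ref(`users/${uid}`).set({ firstName, lastName });
    res.status(200).send({ ok: true });
  } catch (error) {
    res.status(500).send({ error: error.message });
  }
});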
Which approach is better and why?

A lot depends on scale here. If you're remaining within the free plan limits, the sky is the limit. If you're working at scale, then you'll be paying for the bandwidth to RTDB as well as the invocations of Functions, which could be superfluous.
It's hard to predict what will be useful without knowing your use case (see the XY problem).
As a general rule, you can always add functions later, since they can be triggered from a DB write. So if a write suffices, just do that. Later, you can trigger an event off of that write to take any further actions you need.
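For example, a function triggered by that same write could look roughly like this (a sketch using the first-gen Realtime Database trigger; the path and handler name follow the question's example):
const functions = require('firebase-functions');

// Runs whenever a user's account info is written; no change needed on the client.
exports.onAccountWrite = functions.database.ref('/users/{uid}')
  .onWrite((change, context) => {
    const after = change.after.val();
    console.log(`User ${context.params.uid} is now`, after);
    // ...any further processing (notifications, denormalization, etc.)
    return null;
  });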
If you need to hide proprietary functionality (e.g. trade-secret algorithms or secured game logic), use a Function. If you know you'll be doing calculations on that data, or if the work can't be triggered by a DB event, then use a Function.
If it's just going to end up in the Database and the goal is validation or restricting access, write to the DB and use security rules.

Related

RBAC on GraphQL backend using CASL and graphql-shield and sharing rules with my React front-end

I am running Mongoose and exposing an API using GraphQL (Apollo).
I want to implement RBAC, and after some research I came up with a solution using CASL and graphql-shield.
Ideally, I would then want to share the rules with my React front-end.
First step, planning on a piece of paper.
I would first define my actions: Create, Read, Update, Delete.
Then I would define my subjects: Car, Motorcycle.
After that is done I would proceed to define my roles: CarSpecialist, MotoSpecialist, Admin.
I would then define some conditions: "subject is my own", etc.
Finally, I would assign to each role, a set of abilities (combination of action, subject, conditions).
Now with all this done, I start actually coding my solution.
I start by writing the abilities in CASL: actions and subjects are pretty straightforward to define.
Conditions are a bit trickier and I have at least two options:
I use "vague" notions that in turn have to be interpreted by whatever needs to enforce them (back or front end).
I use the CASL mongoose integration plugin, at the cost of losing the ability to share with my frontend.
Any input on which to choose?
Now once CASL abilities are defined, is it up to graphql-shield to enforce them?
How do I do the mapping between (CASL) actions, subjects and conditions to graphql terms: Schema, Query, Mutations ...?
I’ll try to answer this question as well as I can:
You don’t lose the capability to share permissions with the UI if you use default conditions. Conditions are interpreted in JS when you run ability.can. So, if the MongoDB query language is fine for you, there is no need to change it!
graphql-shield is a special kind of GraphQL middleware. If you use CASL with GraphQL middleware, you don’t need graphql-shield: use either CASL + a custom GraphQL middleware, or graphql-shield.
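As an illustration, a CASL check inside a resolver (or a thin custom middleware wrapping your resolvers) could look something like this; context.ability and the Car model are assumptions about how your request context is built:
const { ForbiddenError } = require('@casl/ability');

const resolvers = {
  Query: {
    cars: async (parent, args, context) => {
      // context.ability is assumed to be built per request, e.g. defineAbilityFor(context.user)
      ForbiddenError.from(context.ability).throwUnlessCan('read', 'Car');
      return context.models.Car.find(); // hypothetical Mongoose model on the context
    },
  },
};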
Every GraphQL type has an underlying source type. The source type is basically your domain model or DB model that encapsulates business logic. This is your mapping :) just check permissions on the source type and that’s it. But if you share permissions with the UI, then you need to transform the backend permissions (before sending them to the UI), which are written for source types, into permissions that can be applied to the GraphQL types. Alternatively, you can expose some private props (e.g., ownerId of Car) as part of the GraphQL type. But if the only purpose of this is to satisfy permission sharing, then I’d go with the transformation option:
import { AbilityBuilder, Ability } from '@casl/ability';

function defineAbility(user, props) {
  const { can, rules } = new AbilityBuilder(Ability);

  can('read', 'Post', { [props.authorId]: user.id });
  // ...

  return rules;
}

const currentUser = { id: 1 };

const backendRules = defineAbility(currentUser, {
  authorId: 'authorId'
});
const uiRules = defineAbility(currentUser, {
  authorId: 'author.id'
});
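On the UI, the shared rules can then be turned back into an ability and checked against plain objects (the subject helper tells CASL which type the object represents):
import { Ability, subject } from '@casl/ability';

const ability = new Ability(uiRules);
const post = { author: { id: 1 }, title: 'Hello' };

ability.can('read', subject('Post', post)); // true, because author.id matches currentUser.id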
Alternatively, you can check permissions on the backend and share the results with the frontend by exposing a permissions subtype on every GraphQL type:
{
  cars {
    items {
      permissions {
        canUpdate
        canRead
      }
    }
  }
}
The consequence of this is that your server will spend more time generating the response, especially when you retrieve items for pagination. So, check response times before proceeding with that. The good point is that you don’t need CASL on the UI, and the permission-checking logic is completely hidden on the backend.
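For reference, a resolver for such a permissions field could be as small as this (a sketch; context.ability and the field names are assumptions):
const { subject } = require('@casl/ability');

const resolvers = {
  Car: {
    permissions: (car, args, context) => ({
      canRead: context.ability.can('read', subject('Car', car)),
      canUpdate: context.ability.can('update', subject('Car', car)),
    }),
  },
};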

How do you share state in a micro-frontend scenario?

A first idea would be cookies, yet you can run out of space really fast.
There are several ways to get communication in microfrontends.
As already noted, the different microfrontends should be loosely coupled, so you'd never talk directly from one to another.
The key question is: Is your microfrontend solution composed on the server-side or client-side?
For the client side I've written an article on the communication.
If you are on the server side (the question seems to go in that direction due to the mention of cookies), then I would suggest using the standard microservice patterns for communication and exchanging state. Of course, using centralized systems such as a Redis cache would help there.
In general the different microfrontends should have their own state and be as independent as possible.
Usually what you want to share is not the state/data itself, but rather the state together with a UI representation. The reason is simple: that way you don't have to deal with the representation and edge cases (what if the data is not available?). One framework that demonstrates this is Piral.
Hope that helps!
There's no shared state, that'd break the concept of the segregation that's supposed to take place. This pattern is present among all microservices architectures as it's supposed to eliminate single points of failure and other complications in maintaining a bigger store. The common approach is for each "micro frontend" to have its own store (i.e. Redux). The Redux docs have a topic on this.
First, you should avoid having shared state between microfrontends (MFEs) as much as possible. This is a best practice to avoid coupling, reduce bugs, etc.
A lot of the time you don't need it; for example, any information/state that comes from the server (e.g. the "user" information) can be requested individually by each MFE when it needs it.
However, in case you really need a shared state there are a few solutions like:
- Implement the pub/sub pattern in the Window object.
There are a few libraries that already provide this.
// MFE 1
import { Observable } from 'windowed-observable';

const observable = new Observable('messages');
observable.publish(input.value); // input.value markup is not present in this example for simplicity

// MFE 2
import { Observable } from 'windowed-observable';

const observable = new Observable('messages');
const handleNewMessage = (newMessage) => {
  setMessages((currentMessages) => currentMessages.concat(newMessage)); // custom logic to handle the message
};
observable.subscribe(handleNewMessage);
reference: https://dev.to/luistak/cross-micro-frontends-communication-30m3#windowed-observable
- Dispatch/Capture Custom browser Events
Remember that Custom Events can have a 'detail' property that allows passing information.
// MFE 1
const customEvent = new CustomEvent('message', { detail: input.value });
window.dispatchEvent(customEvent);

// MFE 2
const handleNewMessage = (event) => {
  setMessages((currentMessages) => currentMessages.concat(event.detail));
};
window.addEventListener('message', handleNewMessage);
This approach has the important issue that it only works for newly dispatched events, so you can't read the state if you don't capture the event at the moment it fires.
reference: https://dev.to/luistak/cross-micro-frontends-communication-30m3#custom-events
In both implementations, using a good naming convention will help a lot to keep order.

Best way to refresh token every hour?

I am building a website with React and I have to send about 3 requests per page, but first of all I have to get a communication token (which, by the way, needs to be refreshed every hour) and then use it as a base for all other requests.
My plan is to get it as soon as App mounts, put it in state (redux, thunk), use it in every component that subscribes to the store, and put a setInterval call in the componentDidMount method too. Another thing that comes to mind is to put it in local storage, but that would be a bit complicated (I have to parse it every time I get something from local storage).
class App extends React.Component {
  componentDidMount() {
    this.props.getToken();
    setInterval(this.props.getToken, 5000);
  }
  // ...
}
This works pretty well, and switching between pages doesn't break anything. Note that 5000 milliseconds here is just for trying it out; I will change it to 3,500,000. Is this OK, or is there another way to do this? Thanks!
Your implementation is pretty good, although I'd make a few changes:
Use local storage so you don't have to refetch your token if the user refreshes the page (since it'll be lost from memory). You'll also get the same benefit when working with multiple tabs. You can easily create a LocalStorageService that does all the parsing/stringifying for you so you don't have to worry.
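A minimal sketch of such a LocalStorageService (the name and shape are illustrative):
// Wraps the JSON parsing/stringifying so the rest of the app deals with plain values.
const LocalStorageService = {
  set(key, value) {
    localStorage.setItem(key, JSON.stringify(value));
  },
  get(key) {
    const raw = localStorage.getItem(key);
    return raw ? JSON.parse(raw) : null;
  },
  remove(key) {
    localStorage.removeItem(key);
  },
};

export default LocalStorageService;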
I'd suggest moving that logic into some kind of service where you can control your token flow much more easily - e.g. what happens if the user logs out or the token somehow becomes invalid? You'd have to get a new token from somewhere other than your App (since the root componentDidMount will be called only once), and you'd also need to clear the current interval (to which you won't have a reference with the current implementation) to avoid multiple intervals.
Instead of intervals maybe you could even use setTimeout to avoid having multiple intervals in edge cases:
getToken() {
  // do your logic
  clearTimeout(this.tokenExpire);
  this.tokenExpire = setTimeout(() => this.getToken(), 5000);
}
Overall your implementation is fine - it can only be improved for easier maintenance, and you'll need to cover some edge cases (at least the ones mentioned above).
Ideally your server should keep tokens in secured sessions so they are not vulnerable to XSS.
If there's no such option, I'd suggest using axios. You can configure it to check the tokens on each request or response and handle them accordingly.
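A rough sketch of that with axios interceptors (the token storage key and the refreshToken helper are assumptions):
import axios from 'axios';

// Attach the current token to every outgoing request.
axios.interceptors.request.use((config) => {
  const token = localStorage.getItem('token'); // or read it from your store
  if (token) {
    config.headers.Authorization = `Bearer ${token}`;
  }
  return config;
});

// If the server rejects the token, fetch a new one and retry the request once.
axios.interceptors.response.use(
  (response) => response,
  async (error) => {
    if (error.response && error.response.status === 401 && !error.config._retried) {
      error.config._retried = true; // custom flag to avoid retry loops
      const newToken = await refreshToken(); // assumed helper that calls your token endpoint
      localStorage.setItem('token', newToken);
      error.config.headers.Authorization = `Bearer ${newToken}`;
      return axios(error.config);
    }
    return Promise.reject(error);
  }
);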

How should we structure our models with microservices?

For example, if I have a microservice with this API:
service User {
  rpc GetUser(GetUserRequest) returns (GetUserResponse) {}
}

message GetUserRequest {
  int32 user_id = 1;
}

message GetUserResponse {
  int32 user_id = 1;
  string first_name = 2;
  string last_name = 3;
}
I figured that for other services that require users, I'm going to have to store this user_id in all rows that have data associated with that user ID. For example, if I have a separate Posts service, I would store the user_id information for every post author. And then whenever I want that user information to return data in an endpoint, I would need to make a network call to the User service.
Would I always want to do that? Or are there certain times that I want to just copy over information from the User service into my current service (excluding saving into in-memory databases like Redis)?
Copying complete data is generally never required. Most of the time, for purposes of scale or to make microservices more independent, people tend to copy some of the information that is more or less static in nature.
For example: in the Posts service, I might copy basic author information like the name, because when somebody makes a request to the Posts microservice to get a list of posts based on some filter, I do not want to have to fetch the author's name for each post.
Also, the side effect of copying data is having to maintain its consistency, so make sure your business really demands it.
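As an illustration, the record kept by the Posts service might look something like this (field names are made up); the copied fields would be refreshed from "user updated" events rather than on every read:
// What the Posts service stores per post: the user_id as the authoritative reference,
// plus a small, mostly-static copy of the author's display data.
const post = {
  id: 42,
  userId: 7,               // reference into the User service
  authorName: 'Jane Doe',  // denormalized copy, updated when a user-updated event arrives
  body: 'Hello world',
};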
You'll definitely want to avoid sharing database schema/tables. See this blog for an explanation. Use a purpose built interface for dependency between the services.
Any decision to "copy" data into your other service should be made by the service's team, but they had better have a really good reason for it to make sense. Most designs won't require duplicated data because the service boundaries should be domain-specific and non-overlapping. In the case of user IDs, they can often be treated as contextual references without any attached logic about users.
One pattern observed is: If you have auth protected endpoints, you will need to make a call to your auth service anyway - for security - and that same call should allow you to acquire whatever user id information is necessary.
All the regular best practices for API dependencies apply, e.g. regarding stability, versioning, deprecating etc.

Firebase having thousands of on() event listeners good design

We need to run some operations on our Firebase DB and manipulate data after certain input is given by the user from the mobile device, modifying a flag.
Currently we are using on() to listen to a particular flag in each user's node. We are running these listeners from a Node.js server hosted on Heroku.
If we plan to have 100 thousand users, we will have 100 thousand listeners: one listener for each user's flag, waiting to be manipulated by the user on the mobile device.
Is this a good design in terms of Firebase?
Ideally we can create a REST API which is called by users and then on the Node JS server we can manipulate the data.
What is the best way to run background operation on Data on Firebase based on user input?
We were using Parse earlier and it was easy to achieve this using Parse Cloud code. With Firebase we are having issues because of this.
If we plan to have 100 thousand users, we will have 100 thousand listeners: one listener for each user's flag, waiting to be manipulated by the user on the mobile device.
This sounds like a bad data design. While it is definitely possible to listen for changes to hundreds of thousands of items, it shouldn't require hundreds of thousands of listeners.
My guess (because you didn't include a snippet of your JSON) is that you have a structure similar to this:
users
  $uid
    name: "user6155746"
    flag: "no"
And you're attaching a listener on just the flag of each user with something like:
ref.child('users').on('child_added', function(userSnapshot) {
  userSnapshot.ref().child('flag').on('value', function(flagSnapshot) {
    console.log('the flag changed to ' + flagSnapshot.val());
  });
});
In code this is simple, in practice you'll have a hard time managing the "flag listeners". When will you remove them? Do you keep a list of them?
All of these things become a lot simpler if you isolate the information that you're interested in within the JSON tree:
users
  $uid
    name: "user6155746"
userFlags
  $uid: "no"
Now you can just listen on userFlags to see if the flag of any user has changed:
ref.child('userFlags').on('child_changed', function(userSnapshot) {
  console.log('Flag of user ' + userSnapshot.key() + ' changed to ' + userSnapshot.val());
});
With this you have a single listener, monitoring the flag of potentially hundreds of thousands of users.