Can hackers impact information inside state?

In my React application I calculate the price of the order on the back-end and then transfer it to the state. But in the end, the PayPal order amount is passed through the state. Which means that if a hacker can find a way to change the state to "$1", they can get the items cheaper.
This is just one case of me calculating things inside my state, and I was wondering whether a scenario of a hacker changing the state is possible.
One more case of me doing something sensitive with state:
When a user tries to reset their password and their IP is not blacklisted for too many tries, I transfer them to a page where they need to enter the PIN code that they received on their phone. If they enter an invalid PIN, I increase the "failedTries" state and won't accept their submission once they have failed 3 times. This is done instead of going all the way to the database and storing their failed PIN codes. If a hacker changes the state to 0, they can simply brute-force the phone PIN, which is only 6 digits long.

I think you should save failedTries in the database, not in the UI layer, just like the calculated price.
You should get the protected content from a server, and this server should only deliver the content when the user sends a valid token.
This way, yes, anyone can flip the switch in the client, but that only shows the UI components, without any data.
This is the usual approach when creating single-page applications. As long as you don't have secret or sensitive data right in your client from the beginning, they are as safe as your server / API that delivers the data.
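A minimal sketch of that idea, assuming an Express-style Node back-end; db, createPayPalOrder, and the route names are hypothetical helpers, not anything from the question. The client only sends item IDs and the PIN; the price and the failed-tries counter live entirely on the server:
// Server-side sketch: recompute the price and keep the failed-tries counter on
// the server, so nothing held in React state has to be trusted.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/api/checkout', async (req, res) => {
  const { itemIds } = req.body;                       // client sends only item IDs
  const items = await db.findItems(itemIds);          // hypothetical data-access helper
  const total = items.reduce((sum, item) => sum + item.price, 0); // recomputed server-side
  const order = await createPayPalOrder(total);       // hypothetical PayPal helper
  res.json({ orderId: order.id });
});

app.post('/api/verify-pin', async (req, res) => {
  const { userId, pin } = req.body;
  const failedTries = await db.getFailedTries(userId); // counter lives in the database
  if (failedTries >= 3) return res.status(429).json({ error: 'Too many attempts' });
  const ok = await db.pinMatches(userId, pin);          // hypothetical PIN check
  if (!ok) {
    await db.incrementFailedTries(userId);
    return res.status(401).json({ error: 'Invalid PIN' });
  }
  res.json({ ok: true });
});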

How to be sure that the react-redux app is rendered based on the latest request?

I have a react-redux application which:
Loads N records from the database depending on a "limit" query parameter (by default 20 records) on first application load (initialization)
Every 10 seconds the app requests the same (or newer) records from the database to update the data in real time
If a user changes filters, the app requests new records from the database according to the filter and re-renders the app (and changes the interval to load data according to the filters)
If a user scrolls down, the app automatically loads more records.
The problem is that if a user, for instance, tries to filter something while the interval request is loading more data at the same time, the two requests can clash and overwrite each other. How can I be sure of the request sequence in a react-redux app? Maybe there is a common approach to properly queueing requests?
Thanks in advance!
I am not sure what you mean by 'clash'. My understanding is that the following will happen:
Assuming that both requests are successful, data is retrieved for each of them, the redux state will be updated twice, and the component which renders the updated state will render twice (and the time between the two renders might be very short, which might not be very pleasant for the user).
If you want only one of these two requests to refresh the component, then a possible solution may be the following:
Each request starts, before retrieving data from the database, by creating a 'RETRIEVAL_START' action. 'RETRIEVAL_START' will set a redux state variable 'retrievalInProgress'.
If you want, in such a case, to get results only from the 1st of the two requests, you can check, before calling the action creator from the component, whether 'retrievalInProgress' is on. If it is, don't call the action creator (in other words, do not request data while a request is in progress). 'retrievalInProgress' will be cleared upon successful or failed retrieval of data.
If you want to get results only from the 2nd of the two requests, then make 'retrievalInProgress' a counter instead of a boolean. In the 'retrievalSuccess' action of the reducer, if this counter is higher than 1, it means that a new request has already started. In this case, do not update the state, but decrement the counter.
I hope that this makes sense. I cannot be 100% sure that this works before I test it, which I am not going to do :), but this is the approach I would take.
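A minimal sketch of the counter variant described above, assuming redux-thunk middleware; fetchRecords is a hypothetical API helper, and the action names follow the answer:
const RETRIEVAL_START = 'RETRIEVAL_START';
const RETRIEVAL_SUCCESS = 'RETRIEVAL_SUCCESS';

// Thunk action creator: bump the in-flight counter before fetching.
function loadRecords(filters) {
  return async (dispatch) => {
    dispatch({ type: RETRIEVAL_START });
    const records = await fetchRecords(filters); // hypothetical API helper
    dispatch({ type: RETRIEVAL_SUCCESS, records });
  };
}

const initialState = { retrievalInProgress: 0, records: [] };

function recordsReducer(state = initialState, action) {
  switch (action.type) {
    case RETRIEVAL_START:
      return { ...state, retrievalInProgress: state.retrievalInProgress + 1 };
    case RETRIEVAL_SUCCESS:
      // Counter > 1 means another request started after this one: drop this response.
      if (state.retrievalInProgress > 1) {
        return { ...state, retrievalInProgress: state.retrievalInProgress - 1 };
      }
      return { ...state, retrievalInProgress: 0, records: action.records };
    default:
      return state;
  }
}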

REST API for a game

I am developing a distributed web application to let any user play the Klondike (solitaire) game. I want to develop a REST API, but I am not sure what the resource URLs should look like.
On the server I have the 'game', 'stock', 'waste', 'tableau', 'foundation' classes, among others. The game class has the methods moveFromStockToWaste, moveFromTableauToTableau... which implement the movements of the game.
What I have read about REST APIs is that the resource URLs should look like a hierarchy of nouns, while the operations (verbs) over these nouns are the HTTP methods (GET, PUT, POST, PATCH).
I am not sure if the way to move a card from one tableau pile to another via the REST API should look like this, though the moveFromTableauToTableau resource is a verb and not a noun:
UPDATE /player/{playerId}/game/{gameId}/moveFromTableauToTableau
Another way I have thought of is having these tableau pile card resources:
URL: /player/{playerId}/game/{gameId}/TableauPile/1/
that in turn have resources like the number of non-upturned cards and the upturned cards (all the information needed about the tableaus).
Then update this tableau pile resource by deleting the last card:
DELETE /player/{playerId}/game/{gameId}/TableauPile/1/upTurnedCard/3
And then put the deleted card in the target pile, passing the new card's suit and value:
POST /player/{playerId}/game/{gameId}/TableauPile/3/upTurnedCard
But this way the REST API would allow moving a card from the tableau to the waste, and that is not a valid move.
I always think of designing a REST API as a pretty tricky task.
The second approach seems cleaner in terms of naming conventions, but I think you should never compromise the integrity of your target system to be compliant with such things. If you allow one atomic operation to be made in two HTTP calls, it is in fact no longer atomic, and you expose your system to an unpredictable state in case of network failure or if either call fails for some reason. Avoiding this kind of problem must be the top priority.
One idea could be to think of moves in terms of a moves collection. So for a game you have a moves resource. You can then refine the nature of the move with some additional request parameters, like:
POST /players/{playerId}/games/{gameId}/moves?type=TABLEAU_TO_TABLEAU
Body:
{
  "src": "tableauPile/1",
  "dest": "tableauPile/3"
}
This way you should have enough flexibility to handle the different types of moves.
Besides, may I suggest using the plural form for resource naming:
player -> players
game -> games
so
GET /players naturally means "give me all values of the players resource"
GET /players/1 means "give me the player from the players resource with the restriction playerId=1"
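A minimal sketch of such a moves endpoint, assuming an Express-style server with JSON body parsing; the games repository and the game methods (isLegalMove, applyMove, view) are hypothetical, not from the answer:
const express = require('express');
const app = express();
app.use(express.json());

// A single 'moves' collection resource that validates and applies a move.
app.post('/players/:playerId/games/:gameId/moves', async (req, res) => {
  const { type } = req.query;                        // e.g. TABLEAU_TO_TABLEAU
  const { src, dest } = req.body;
  const game = await games.load(req.params.gameId);  // hypothetical repository
  if (!game.isLegalMove(type, src, dest)) {          // the server enforces the rules
    return res.status(409).json({ error: 'Illegal move' });
  }
  game.applyMove(type, src, dest);
  await games.save(game);
  res.status(201).json(game.view());                 // return the new state of the game
});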
I am developing a distributed web application to let any user play the Klondike (solitaire) game. I want to develop a REST API, but I am not sure what the resource URLs should look like.
Oracle: how would you implement this as a website?
On the server I have the 'game', 'stock', 'waste', 'tableau', 'foundation' classes, among others. The game class has the methods moveFromStockToWaste, moveFromTableauToTableau... which implement the movements of the game.
One important thing to realize is that the classes in your implementation don't matter; REST is about manipulating documents, the changes that happen to the game are side effects of manipulating the "documents".
In other words, the REST API is a mask that your game wears so that it looks like a web site.
See Jim Webber's talk DDD In the Large.
Klondike is effectively a state machine; any given tableau has some limited number of legal moves to make, each of which takes you to a new position.
So one way you might model the API is a representation of the tableau plus affordances (links) for each move, and the game progresses from one state to the next as you follow the links that describe a possible legal move.
There are "only" 8*10^67 or so deals to worry about, and for each of them you effectively have a graph of all of the reachable positions, and order them by traversal order, and then just link them all together.
/76543210987654321098765432109876543210987654321098765432109876543210/0
/76543210987654321098765432109876543210987654321098765432109876543210/1
/76543210987654321098765432109876543210987654321098765432109876543210/2
/76543210987654321098765432109876543210987654321098765432109876543210/3
And so on.
It's not an impossible arrangement, although it may be impractical, and since the URL describes the entire state of the game, the player has access to hidden state.
I'd suggest first trying this approach on something less complicated, like tic-tac-toe.
Hiding the state is relatively straightforward, because the mapping of the current game to a specific seed can be done on the server. That is, you send a POST to the start-game endpoint, which generates some random identifier, maps the random identifier to a seed position, and off you go.
But a potential problem in this design is that HTTP is a stateless protocol; there's no way for the server to "know", when the player requests GET /games/000/152, that the client was previously in a position that could legally move to position 152. You can make the URI hard to guess, but that's about it.
What you likely want is the ability to ensure that the moves made by the player are legal, which means that the server needs to be tracking the current state of the game, and the player gets a view of the open information only.
The simplest HTML model of this would have the representation of the game show the information that the player is allowed, and a form with a list of the legal moves. The player selects one move and submits the form, which is a POST back to the game resource (directly back to the same resource, because we want the cache invalidation properties). Your implementation could then check that the received move is legal, refresh its own local state, and send an appropriate response.
That's the basic pattern we should be considering; GET the game, then send an unsafe request to modify the server's copy of the game.
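A sketch of what that could look like, with illustrative field names that are not from the answer: the GET representation lists the legal moves as affordances, and the client POSTs its chosen move straight back to the game resource.
// What a GET /games/42 response might look like: the open information plus the
// legal moves offered as affordances.
const gameRepresentation = {
  turn: 12,
  tableau: [ /* only the face-up cards the player is allowed to see */ ],
  legalMoves: [
    { id: 'm1', type: 'STOCK_TO_WASTE' },
    { id: 'm2', type: 'TABLEAU_TO_FOUNDATION', src: 'tableau/3', dest: 'foundation/1' }
  ]
};

// The client picks one of the offered moves and POSTs it back to the same
// game resource, which also invalidates any cached copy of that resource.
fetch('/games/42', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ moveId: 'm2' })
});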
The basic plan isn't all that different if you want to use a remote authoring approach. GET fetches a representation of the revealed information, the client makes legal edits to that representation, and PUTs the new representation to the same URL. The server verifies that the new position is reachable from the old position, and accepts the move, using its own copy of the hidden information to update the representation of the player's view.
(Pay careful attention to the meta data used in the response to PUT; the server is supposed to be communicating carefully whether the new representation is adopted as is, or if the server has transformed the proposed representation to make it consistent with the server's constraints).
You could, of course, also use PATCH to communicate the changes made to the representation by the client.
If messages were lost or duplicated, the client's view and the server's view might not be aligned. So you may want to have your representation of the game include a clock/timer/turn number, so that the server can be certain that the player's move is intended for the current state of the game.
EDIT: As Roman notes, HTTP already has the concept of validators built into it, which allows you to lift data from your domain-specific clock into the headers, so that generic components can understand conditional requests and act on them appropriately.
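For instance (a sketch, and my own illustration rather than something from the answer: it assumes the server exposes the turn number as an ETag validator, and applyMoveLocally is a hypothetical client-side edit):
// Make the move submission conditional on the game state the client last saw,
// so a stale view cannot overwrite a newer one.
async function submitMove(chosenMove) {
  const res = await fetch('/games/42');              // hypothetical game URL
  const etag = res.headers.get('ETag');              // e.g. derived from the turn number
  const game = await res.json();
  const edited = applyMoveLocally(game, chosenMove); // hypothetical client-side edit
  const put = await fetch('/games/42', {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json', 'If-Match': etag },
    body: JSON.stringify(edited)
  });
  if (put.status === 412) {
    // Precondition Failed: the game moved on; re-fetch the game and try again.
  }
}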
Another way of thinking about the game is to consider event sourcing; the client and the server are taking turns appending entries to a log, and the view of the game itself is computed by applying the events in the log. The client's moves would be limited to the set that manipulate the open information, the server's moves would reveal previously hidden information.
So you could use Atom Pub, or something very similar to it, to write new entries into the log. This in effect gives you two different representations of the game - the view, that shows you what you see when you look at the tableau, and the feed, which shows you the moves made to reach that point.
(If you squint, you'll see that this is really just a variation on "let the client pick a legal move".)
You could, I suppose, treat each of the elements in your domain model as a resource, and try to design an API to allow the client to manipulate those directly, but it isn't at all clear to me what benefit you get from that.

Coalescing Flux Actions

This is a detailed Flux Architecture question.
Say I have a resource that gets asynchronously prepared. Let's take User Data as an example.
There are multiple different ways to get this user data - in our example it may be that it requires a few different subsequent queries to generate from the server or is stored locally as a whole.
Case 1:
User data needs sequential steps. Fire USER_DATA_1_SUCCESS, then USER_DATA_2_SUCCESS. Other stores listen for USER_DATA_2_SUCCESS.
Case 2:
User Data is locally available as a whole. Fire a USER_DATA_READY action.
I'm trying to figure out how to go from a linear state completion (USER_DATA_2_SUCCESS) to a resource-ready event (USER_DATA_READY) in the stores. I can't dispatch USER_DATA_READY directly from the stores - I get a "cannot dispatch in the middle of a dispatch" error. At the same time I want granularity - I want to control the different stages of putting the data together.
I'd like to have one way to condense these calls with good design. The option I can think of is:
Add a convenience 'Ready' function in a client class that is visible to the store, and call it with a tiny timeout in the store's callback for USER_DATA_2_SUCCESS.
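A minimal sketch of that option, assuming the standard Flux Dispatcher instance (here called AppDispatcher) and illustrative module names; the setTimeout deferral is what sidesteps the "dispatch in the middle of a dispatch" error:
// userActions.js (sketch)
const UserActions = {
  userDataReady(userData) {
    AppDispatcher.dispatch({ type: 'USER_DATA_READY', userData });
  }
};

// userStore.js (sketch): when the last sequential step completes, defer the
// USER_DATA_READY dispatch until the current dispatch has finished.
AppDispatcher.register((action) => {
  if (action.type === 'USER_DATA_2_SUCCESS') {
    setTimeout(() => UserActions.userDataReady(action.userData), 0);
  }
});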
Can anyone suggest a better flow?

ReactJS - props vs state + Store designing

Imagine the following:
you're writing a 'smart-house' application which manages the temperature in your house.
In my view I'd like to:
see current temperature for each room
set desired temperature for each room
see whether air conditioning is turned on/off for each room
There is an external device communicating with your application via websockets (it periodically sends the current temperature and air conditioning status).
I see two options there:
1) Create one 'big' store containing data structures like:
var _data = [
  {
    name: 'Kitchen',
    currentTemperature: 25,
    desiredTemperature: 22,
    sensors: [
      {
        name: 'Air Conditioning',
        state: 'on'
      }
      // ... there might be other sensors too ...
    ]
  }
];
There will be a TemperatureManager component (or something similar). It would have state and fetch it from the Store periodically.
Then it would just distribute parts of the state to its descendants (i.e. RoomTemperatureManager, RoomSystemSensorManager), passing them as props.
If anything changes (for example, the temperature in the bedroom), it will fetch all data from the store and re-render its descendants if necessary.
2) The second solution is
to make RoomTemperatureManagers and RoomSystemSensorManagers have their own state. It is also related to having standalone stores for both Temperature and SystemSensorState.
Those Stores would then have parametrized getters (i.e. getSensorState(roomName)) instead of methods to fetch all data.
Question:
Which option is better?
Additional question:
Is it okay for leaf components (i.e. the one responsible for managing the desired temperature) to call the ActionCreator directly? Or should only the supervising component know anything about the ActionCreator and pass the proper method as a property to its descendants?
The 2 options you describe in your post are really 2 different questions:
One big store or several different stores?
Should child components (RoomTemperatureManagers) have their own state or receive props from store?
Ad 1. One big store is easier, as long as it does not get too complicated. A rule of thumb that I use: if your store has > 300 lines of code, it is probably better to separate it into different stores.
Ad 2. Props are generally better than state. But in your case, I would think you will need state in e.g. your Temperature-manager: you set the temperature in your app (and want to see that reflected in some slider or whatever). For this, you will need state.
State updates the front-end immediately (optimistic update).
The component then sends off a set_temperature action to a sensor in the house.
Then the sensor in the house confirms to your app that it has set the new temperature.
The store(s) update(s) and emit change.
The temperature setting from the sensor in the house is communicated as props to your Temperature manager.
The Temperature manager does whatever it needs to do (simply update state with the new prop, display a confirmation message, or a nice error message if things broke down along the communication chain).
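A minimal sketch of that flow in a Flux-style ActionCreator; AppDispatcher, smartHouseClient, and the action types are assumptions for illustration, not names from the original answer:
// temperatureActionCreator.js (sketch)
const TemperatureActionCreator = {
  setTemperature(roomName, temperature) {
    // 1. Optimistic update: the store updates and the slider reflects it immediately.
    AppDispatcher.dispatch({ type: 'SET_TEMPERATURE_REQUESTED', roomName, temperature });
    // 2. Ask the device (hypothetical websocket wrapper) to actually apply it.
    smartHouseClient.setTemperature(roomName, temperature)
      .then((confirmed) =>
        // 3. Device confirmed: the store updates, emits change, props flow down again.
        AppDispatcher.dispatch({ type: 'SET_TEMPERATURE_CONFIRMED', roomName, temperature: confirmed }))
      .catch((error) =>
        // 4. Something broke along the chain: let the UI show a nice error message.
        AppDispatcher.dispatch({ type: 'SET_TEMPERATURE_FAILED', roomName, error: error.message }));
  }
};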
I believe the best approach would be to delegate. Your TemperatureManager should have a mechanism to notify listeners that there is an update; in turn, the Room(Temperature|System)Manager would register as a listener a callback that consumes the updated data and changes the state accordingly. This will leverage the virtual DOM diff, so only changes are displayed rather than whole parts being re-rendered, and it creates a single point that communicates with the Store. Room(Temperature|System)Manager should only communicate with the Store if there is an update it needs to make to the model it is working with.
TemperatureManager could be improved so that, on subscribing, you can specify which data to listen to for updates. It should not assume which subset of data a particular 'manager' should get.

How can I prevent database being written to again when the browser does a reload/back?

I'm putting together a small web app that writes to a database (Perl CGI & MySQL). The CGI script takes some info from a form and writes it to a database. I notice, however, that if I hit 'Reload' or 'Back' on the web browser, it'll write the data to the database again. I don't want this.
What is the best way to protect against the data being re-written in this case?
Do not use GET requests to make modifications! Be RESTful; use POST (or PUT) instead, and the browser should warn the user not to reload the request. Redirecting (using HTTP redirection) to a receipt page via a normal GET request after a POST/PUT request will make it possible to refresh the page without getting warned about resubmitting.
EDIT:
I assume the user is logged in somehow, and therefore you already have some way of tracking the user, e.g. a session or similar.
You could generate a timestamp (or a random hash, etc.) when displaying the form, storing it both as a hidden field (right beside the anti-Cross-Site-Request token I'm sure you already have there) and in a session variable (which is stored safely on your server). When you receive the POST/PUT request for this form, you check that the timestamp is the same as the one in the session. If it is, you set the timestamp in the session to something new and hard to guess (the timestamp concatenated with some secret string, for instance), and then you can save the form data. If someone repeats the request now, you won't find the same value in the session variable and can deny the request.
The problem with doing this is that the form becomes invalid if the user clicks back to change something, which might be a bit too harsh unless it's money you're updating. So if you have problems with "stupid" users who refresh and click the back button, thus accidentally reposting something, just using POST will remind them not to do that, and redirecting will make it less likely. If you have a problem with malicious users, you should use a timestamp too, although it will confuse users sometimes; and if users are deliberately posting the same message over and over, you probably need to find a way to ban them. Using POST, having a timestamp, and even doing a full comparison against the whole database to check for duplicate posts won't help at all if malicious users just write a script to load the form and submit random garbage automatically. (But cross-site-request protection makes that a lot harder.)
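The question uses Perl CGI, but here is a minimal sketch of that one-time-token idea in Express-style JavaScript, assuming express-session middleware; the routes, field names, and saveToDatabase helper are illustrative:
const express = require('express');
const session = require('express-session');
const crypto = require('crypto');
const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

// Render the form with a fresh token, stored both in the session and as a hidden field.
app.get('/form', (req, res) => {
  const token = crypto.randomBytes(16).toString('hex');
  req.session.formToken = token;
  res.send(`<form method="POST" action="/submit">
    <input type="hidden" name="token" value="${token}">
    <input type="text" name="comment">
    <button>Send</button>
  </form>`);
});

// Accept the submission only if the token matches, then burn it so a reload
// or back-button resubmission is rejected.
app.post('/submit', (req, res) => {
  if (!req.session.formToken || req.body.token !== req.session.formToken) {
    return res.status(409).send('Duplicate or invalid submission');
  }
  req.session.formToken = null;
  saveToDatabase(req.body); // hypothetical database helper
  res.send('Thanks, your submission was recorded.');
});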
Using a POST request will cause the browser to try to prevent the user from submitting the same request again, but I'd recommend using session-based transaction tracking of some kind, so that if the user ignores the warnings from the browser and resubmits their query, your application will prevent duplication of changes to the database. You could include a hidden input in the submission form with its value set to a crypto hash, and record that hash once the request is submitted and processed without error.
I find it handy to track the number of form submissions the user has performed in their session. Then, when rendering the form, I create a hidden field that contains that number. If the user then resubmits the form by pressing the back button, it'll submit the old number, and the server can tell that the user has already submitted the form by comparing what's in the session with what the form is saying.
Just my 2 cents.
If you aren't already using some sort of session management (which would let you note and track form submissions), a simple solution would be to include some sort of unique identifier in the form (as a hidden element) that is either part of the main DB transaction itself or tracked in a separate DB table. Then, when a form is submitted, you check the unique ID to see if it has already been processed. And each time the form itself is rendered, you just have to make sure it has a unique ID.
First of all, you can't trust the browser, so any talk about using POST rather than GET is mostly nerd flim-flam. Yes, the client might get a warning along the lines of "Did you mean to resubmit this data again?", but they're quite possibly going to say "Yes, now leave me alone, stupid computer".
And rightly so: if you don't want duplicate submissions, then it's your problem to solve, not the user's.
You presumably have some idea what it means to be a duplicate submission. Maybe it's the same IP within a few seconds, maybe it's the same title of a blog post or a URL that has been submitted recently. Maybe it's a combination of values - e.g. IP address, email address and subject heading of a contact form submission. Either way, if you've manually spotted some duplicates in your data, you should be able to find a way of programmatically identifying a duplicate at the time of submission, and either flagging it for manual approval (if you're not certain), or just telling the submitter "Have you double-clicked?" (If the information isn't amazingly confidential, you could present the existing record you have for them and say "Is this what you meant to send us? If so, you've already done it - hooray")
I'd not rely on POST warnings from the browser. Users just click OK to make messages go away.
Any time you have a request that needs to happen only once, e.g. 'make a payment', send a unique token down that gets submitted back with the request. Throw the token out after it comes back, so you can tell when something is not a valid submission (anything with a token that isn't 'active'). Expire active tokens after X amount of time, e.g. when a user session ends.
(Alternatively, track the tokens that have come back, and if you have received a given token before, then it is invalid.)
Do a POST every time you alter data, but never return an HTML response from a POST... instead, return a redirect to a GET that retrieves the updated data as a confirmation page. That way, there is no worry about the user refreshing the page. If they refresh, all that will happen is another retrieval, never a data-altering action.
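A minimal sketch of that Post/Redirect/GET pattern, again in Express-style JavaScript rather than the question's Perl CGI; the routes and the createOrder/findOrder helpers are illustrative:
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

// The POST alters data and immediately redirects; the GET only reads,
// so refreshing the confirmation page never repeats the write.
app.post('/orders', async (req, res) => {
  const order = await createOrder(req.body);     // hypothetical write helper
  res.redirect(303, `/orders/${order.id}`);      // 303 See Other -> browser issues a GET
});

app.get('/orders/:id', async (req, res) => {
  const order = await findOrder(req.params.id);  // hypothetical read helper
  res.send(`Order ${order.id} received.`);
});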
