I've been looking at the Baobab library and am very attracted to the "single-tree" approach, which I interpret as essentially a single store. But so many Flux tutorials seem to advocate many stores, even a "store per entity." Having multiple stores seems to me to present all kinds of concurrency issues. My question is, why is single store a bad idea?
It depends on what you want to do and how big your project is. There are a few reasons why having several stores is a good idea:
If your project is not so small after all, you may end up with a huge 2,000-3,000-line store, and you don't want that. That's the point of writing modules in general: you want to avoid files bigger than 1,000 lines (and below 500 is even nicer :) ).
Writing everything in one store means you can't take advantage of the dispatcher's dependency management via the waitFor function (a sketch follows this list). It also becomes harder to check dependencies, and potential circular dependencies, between your models, since they all live in one store. I would suggest you take a look at https://facebook.github.io/flux/docs/chat.html for that.
A single store is also harder to read. With several stores you can figure out at a glance what types of data you have, and with a constants file for the dispatcher events you can see all your events.
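To illustrate the waitFor point above, here is a minimal sketch using Facebook's flux dispatcher; the two stores and the action type are hypothetical, not from a real app:

```js
import { Dispatcher } from 'flux';

const dispatcher = new Dispatcher();

// Hypothetical stores: MessageStore depends on ThreadStore updating first.
const threadStoreToken = dispatcher.register((action) => {
  if (action.type === 'RECEIVE_MESSAGE') {
    // ...update thread data first...
  }
});

const messageStoreToken = dispatcher.register((action) => {
  if (action.type === 'RECEIVE_MESSAGE') {
    // Declare the dependency explicitly: block until ThreadStore has handled it.
    dispatcher.waitFor([threadStoreToken]);
    // ...now it's safe to update message data...
  }
});
```

With everything in one store there is only one registered callback, so this explicit dependency graph between stores disappears.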
So it's possible to keep everything in one store, and it may work perfectly, but if your project grows you may regret it badly and end up rewriting everything into several modules/stores. Just my opinion, but I prefer to have clean modules and data workflows.
Hope it helps!
From my experience, working with a single store is definitely not a bad idea. It has some advantages, such as:
A single store to access all data can make it easier to query and to draw relationships between different pieces of data. Using multiple stores can make this a little more difficult (definitely not impossible, though).
It will be easier to make atomic updates to the application state (aka data store).
But the way you implement the Flux pattern will influence your experience with a single data store. The folks at Facebook have been experimenting with this, and it seems like they encourage the use of a single data store with their new Relay+GraphQL stuff (read more about it here: http://facebook.github.io/react/blog/2015/02/20/introducing-relay-and-graphql.html).
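Since the question mentions Baobab, here is a minimal single-tree sketch; the state shape is purely illustrative:

```js
import Baobab from 'baobab';

// The whole application state lives in one tree.
const tree = new Baobab({
  user: { name: 'Jane' },
  todos: []
});

// Cursors give each part of the app a focused view into the same tree.
const todosCursor = tree.select('todos');
todosCursor.push({ title: 'Try the single-store approach' }); // one atomic update

// Relationships between branches are easy to inspect in one place.
console.log(tree.get());
```

Because every update goes through the same tree, the atomic-update and easy-querying advantages above fall out naturally.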
This question might not belong here, but I am struggling a bit with how to structure GraphQL queries/mutations/fragments (using ReactJS/Apollo Client). I have read many articles, and at first I tried to define queries near the component (mostly near the page root file), which initially looked OK; but as you add more and more pages containing similar components, it gets hard to keep track of all the fields everywhere.
As a second option I tried centralising all queries in one folder and creating shared fragments, which seems like a slightly better approach, since you only have to change fields in a couple of places and it is more or less done. But this approach also gets complex from time to time.
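For concreteness, the shared-fragment approach looks roughly like this; the fragment, type, and field names are illustrative, not from my actual code:

```js
import { gql } from '@apollo/client';

// A shared fragment, defined once in a central fragments file.
export const USER_FIELDS = gql`
  fragment UserFields on User {
    id
    name
    avatarUrl
  }
`;

// Any query or mutation needing those fields interpolates the fragment,
// so a field change happens in exactly one place.
export const GET_USER = gql`
  query GetUser($id: ID!) {
    user(id: $id) {
      ...UserFields
    }
  }
  ${USER_FIELDS}
`;
```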
How do you structure queries/mutations/fragments in a scalable way? Any suggestions or articles would be helpful.
Second, a slightly more architectural question: where do you transform data for the view? Is it better to normalize using an Apollo TypePolicy, in a higher-order component, or maybe something else?
I know these are somewhat broad questions and the answer depends on the use case (what kind of app you are building and so on), but let's say you are building a project that needs many features added on the go and is not some simple presentational website.
Thank you, whoever suggests anything.
I have several views interacting with very similar state (slightly different subsets of a master). My first instinct was to reuse a single master slice, resetting it between views.
This approach does everything I need it to, but doesn't seem elegant. Disregarding details of how to actually do it, one would think of the state as owned by each view separately. I've made it global only because it made more sense to just go with the standard Redux approach than to spend a whole lot of time solving a problem that wasn't really a problem.
Duplicating a whole lot of reducers and action creators, or inventing some heavy namespacing abstraction, would be even worse; but if the views end up diverging more over time, using the same master would become less efficient than it currently is. I'm not too worried about that actually happening, but I imagine it has a bearing on the standard solution that I assume exists.
Am I missing a really sleek way of mostly-reusing these with little code duplication or complexity, or some other common solution? It seems unlikely that this really is the best way to handle it.
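For reference, one common answer is the reducer-factory pattern described in the Redux docs under "Reusing Reducer Logic". A minimal sketch, where masterReducer and the meta.view convention are assumptions for illustration:

```js
import { combineReducers } from 'redux';
import masterReducer from './masterReducer'; // hypothetical existing reducer

// Wrap one shared reducer so each instance only reacts to its own view's actions.
function createFilteredReducer(reducer, predicate) {
  return (state, action) =>
    predicate(action) || state === undefined // let the reducer produce its initial state
      ? reducer(state, action)
      : state;
}

const rootReducer = combineReducers({
  viewA: createFilteredReducer(masterReducer, (a) => a.meta && a.meta.view === 'A'),
  viewB: createFilteredReducer(masterReducer, (a) => a.meta && a.meta.view === 'B'),
});
```

Each view gets its own slice from one reducer definition, so there is no duplication and no global reset between views.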
I understand that we encapsulate data to prevent things from being accessed that don't need to be accessed by developers working with my code. However, I only program as a hobby and do not release any of my code to be used by other people. I still encapsulate, but it mostly seems like I'm doing it for the sake of good policy and building the habit. So, is there any reason to encapsulate data when I know I am the only one who will be using my code?
Encapsulation is not only about hiding data. It is also about hiding details of the implementation. When such details are hidden, everyone is forced to use the defined class API, and the class is the only one that can change its internals.
So imagine a situation where you have opened all methods to any class interested in them, and you have a function that performs some calculation. Then you realize you want to replace it, because the logic is not right or you want to perform a more complicated calculation. In such cases you sometimes have to change every place across your application to change the result, instead of changing it in only one place: the API that you provided.
So don't make everything public; it leads to tight coupling and pain whenever you update anything.
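A tiny sketch of that idea; the class and the flat tax rate are invented for illustration:

```js
// Callers only ever use the public API, so the internals can change freely.
class Invoice {
  #items = []; // hidden implementation detail

  addItem(price, quantity) {
    this.#items.push({ price, quantity });
  }

  // If the calculation turns out to be wrong, fixing it here fixes every caller;
  // nothing else in the program knows how the total is computed.
  total() {
    const net = this.#items.reduce((sum, i) => sum + i.price * i.quantity, 0);
    return net * 1.2; // assumed flat 20% tax, purely illustrative
  }
}
```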
Encapsulation is not only about creating "getters" and "setters", but also about exposing a sort of API to access the data (if needed).
Encapsulation lets you keep access to the data in one place and allows you to manage it in a more "abstract" way, reducing errors and making your code more maintainable.
If your personal projects are simple and small, you can do whatever you feel like in order to produce what you need quickly, but bear in mind the consequences ;)
I don't think unnecessary data access can happen only with third-party developers. It can happen with you as well, right? When you allow direct access to data through access rights on variables/properties, whoever is working with it, be it you or someone else, may end up creating bugs by accessing the data directly.
I have an offline React web app where all data is stored locally in indexedDB. There is no server other than the file hosting for static assets. I am getting to the point where I am starting to look into using redux now but I am trying to understand the tradeoffs between moving more data into the store and continuing to rely on the DB. What's the idiomatic way of using a local db with redux?
Currently my app is composed of several container components that each fetch data from the db in componentWillMount. One option for integrating redux is to keep this mostly the same, with the only difference being the state is kept in the store and data is fetched using actions and thunks.
Alternatively, I have seen lots of example code that loads all the data into the store at launch. This makes the whole app more deterministic and easier to test and reproduce. Switching between the main components would happen instantly (at the expense of initial app load). But I lose the benefits the DB provides, like indexes and nice queries.
It seems like it would be unreasonable to load literally the whole db into the store; at least in my case, that would be about 10MB of data, maybe more. So I will always have at least some components that will need to continue fetching their data on mount. But there's a subset of data that is central to the app, and it could be argued that that table should be loaded in its entirety (probably about 5,000 to 10,000 objects).
What's the idiomatic way to work with local storage and redux? I get the sense that async fetches in componentWillMount are not idiomatic if they can be avoided. Even in instances where the state is small enough that it can be fully loaded into the store, is it worth giving up the benefits of a nice, efficient query interface?
Edit: I should mention: I am using Dexie, which is a really, really wonderful library for working with indexedDB. It's fast, has a nice query interface, handles migrations etc... I'd really like to continue using Dexie unless there's a really strong reason to do otherwise.
For reference, here's a discussion on this topic on Dexie's github. The general takeaway from that is "it depends". Not quite the answer I was looking for, so I am hoping to get more insight if possible.
Answering this myself with what I've discovered so far. If a better answer comes along I'll be happy to mark it accepted.
TL;DR: It really does depend. The idiomatic way is indeed to put as much in the store as makes sense. However, it is not un-idiomatic to fetch data asynchronously from elsewhere when that makes sense. Many applications would simply be impractical otherwise.
Dan Abramov's egghead tutorial (even titled "Building React Applications with Idiomatic Redux") takes the approach of keeping all state in the store and persisting it as one giant blob (or the relevant slice) periodically.
This has the advantage that you use the redux store as usual and persistence is essentially transparent. If your data is small enough that this isn't a problem, this is great.
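A minimal sketch of that approach; the throttle helper, storage key, and store import are my assumptions, not taken from the tutorial:

```js
import throttle from 'lodash/throttle';
import { store } from './store'; // hypothetical configured Redux store

// Persist the whole store as one blob, throttled so we don't write on every action.
store.subscribe(throttle(() => {
  try {
    localStorage.setItem('state', JSON.stringify(store.getState()));
  } catch (err) {
    // Writes can fail (quota, private mode); ignoring keeps the app usable.
  }
}, 1000));
```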
As #seanyesmunt noted, there is redux-persist, which basically does a more sophisticated version of this.
Of course, shortly after that he rewrites the tutorial to fetch data from an API. At that point you can just pretend the API is IndexedDB instead; it's really no different. You lose some benefits of redux by not having completely deterministic state as a whole, but in many cases you have no choice.
I ended up doing essentially the same thing as Dexie's Redux sample code: basically, using thunks to write to the DB asynchronously on certain actions.
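Roughly like this; the todos table and action type are illustrative, not Dexie's actual sample:

```js
import db from './db'; // hypothetical Dexie instance with a `todos` table

// Thunk: write to IndexedDB first, then reflect the change in the store.
export function addTodo(todo) {
  return async (dispatch) => {
    const id = await db.todos.add(todo); // Dexie resolves with the new primary key
    dispatch({ type: 'TODO_ADDED', payload: { ...todo, id } });
  };
}
```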
EDIT 2020-12-18: I recommend using dexie-react-hooks and the useLiveQuery() hook. It's an amazing experience to work with and takes away all the complexity around this. See also this blog post about it.
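A minimal sketch of what that looks like; the db instance and todos table are assumptions:

```js
import React from 'react';
import { useLiveQuery } from 'dexie-react-hooks';
import { db } from './db'; // hypothetical Dexie instance with a `todos` table

function TodoList() {
  // Re-runs the query and re-renders whenever the underlying table changes.
  const todos = useLiveQuery(() => db.todos.toArray(), []);
  if (!todos) return null; // query hasn't resolved yet
  return (
    <ul>
      {todos.map((t) => <li key={t.id}>{t.title}</li>)}
    </ul>
  );
}
```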
My old answer was:
This question is of great interest to me as well, as I have been experimenting with React and Dexie on the last project I was in. I am personally examining how GraphQL could address this scenario, but I am still learning. I hope to provide a GraphQL/Dexie example. From what I understand, GraphQL would act as the service layer, and its "schema" (backend store) would use the Dexie queries required to produce the flatter, more simplistic GraphQL data requirements. I will be looking at some ready-to-use GraphQL samples from Apollo or Facebook to get started, and I believe it will be simple to use.
I generally don't think it scales to read an entire db into memory. And I believe application startup is crucial to end users, so I really hope to find a perfect architecture for this. Currently my hope rests on GraphQL.
I hope this question isn't too general or doesn't make sense.
I'm currently developing a basic application that talks to an SQLite database, so naturally I'm using the clojure.java.jdbc library (link) to interact with the DB.
The trouble is, as far as I can tell, the way you insert data into the DB using this library is by simply passing a map (e.g. {:id 1 :name "stackoverflow"}) and a table name (e.g. :website).
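For example (the db-spec here is my own illustrative assumption):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; Hypothetical db-spec pointing at an SQLite file.
(def db-spec {:dbtype "sqlite" :dbname "app.db"})

;; Insert a row by passing a table name and a map, as described above.
(jdbc/insert! db-spec :website {:id 1 :name "stackoverflow"})

;; Rows come back out as plain maps with keyword keys.
(jdbc/query db-spec ["SELECT * FROM website WHERE id = ?" 1])
;; => ({:id 1, :name "stackoverflow"})
```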
The thing I'm concerned about is how I can make this more robust in the wider context of my application. What I mean is that when I write data to the database and retrieve it, I want to use the same formatted map EVERYWHERE in the application: from the data access layer (returning or passing in maps) all the way up to the application layer, where it works on the data and passes it back down again.
What I'm trying to get at is, is there an 'idiomatic' clojure equivalent of JavaBeans?
The problem I'm having right now is having to repeat myself by defining maps manually with column names etc. If I change the structure of my table in the DB, my whole application has to be changed.
As far as I know, there really isn't such a library. There are various systems that make it easier to write queries, but not, AFAIK, anything that "fixes" your data objects.
I've messed around with trying to write something like what you propose myself, but I abandoned the project, since it became very obvious very quickly that this is not at all the right thing to do in a clojure system (and actually, I now tend to think that the approach has only very limited use even in languages that have truly "fixed" data structures).
Issues with the clojure collection system:
All the map access/alteration functions are really functional. That means that alterations on a map always return a new object, so it's nearly impossible to create a forcibly fixed map type that's also easy to use in idiomatic clojure.
General conceptual issues:
Your assumption that you can "use the same formatted map EVERYWHERE in the application, so from the data access layer (returning or passing in maps) all the way up to the application layer where it works on the data and passes it back down again" is wrong if your system is even slightly complex. At best, you can use the map from the DB up to the UI in some simple cases, but the other way around is pretty much always the wrong approach.
Almost every query will have its own result row "type"; you're probably not going to be able to re-use these "types" across queries even in related code.
Also, forcing these types on the rest of the program is probably binding your application more strictly to the DB schema. If your business logic functions are sane and well written, they should only access as much data as they need and no more; they should probably not use the same data format everywhere.
My serious answer is: don't bother. Write your DB access functions for the kinds of queries you want to run, and let those functions check the values moving in and out of the DB in as much detail as you find comforting. Do not try to forcefully keep the data coming from the DB "the same" in the rest of your application. Use assertions and pre/post conditions if you want to check your data format in the rest of the application.
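For instance, the boundary checks might look like this (find-website is a hypothetical accessor):

```clojure
(require '[clojure.java.jdbc :as jdbc])

;; The checks live at the DB boundary, and the rest of the program
;; takes only the fields it needs.
(defn find-website [db-spec id]
  {:pre  [(integer? id)]
   :post [(or (nil? %) (string? (:name %)))]}
  (first (jdbc/query db-spec ["SELECT * FROM website WHERE id = ?" id])))
```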
Clojure favours the concept of a few data structures and lots of functions that work on those few data structures. There are a few ways to create new data structures (which, I guess, internally use the basic ones), like defrecord etc. But even if you use them, that won't really solve the problem of making DB schema changes affect the code less: you will eventually have to go through the layers to add/remove the effects of a schema change, because everywhere you read or create that data needs to be changed.
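A quick defrecord sketch to show why (purely illustrative):

```clojure
;; defrecord fixes a set of fields, but a record still behaves like a map,
;; so a schema change still ripples through every layer touching those keys.
(defrecord Website [id name])

(def site (->Website 1 "stackoverflow"))
(:name site)                                  ;; => "stackoverflow"
(assoc site :url "https://stackoverflow.com") ;; returns a new value, as with any map
```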