This question might not belong here, but I'm struggling a bit with how to structure GraphQL queries/mutations/fragments (using React/Apollo Client). I have read many articles and at first tried to keep queries near the component (mostly near the page root file), which looked fine at first, but as you add more and more pages that contain similar components, it gets hard to keep track of all the fields everywhere.
As a second option I tried centralising all queries in one folder and started creating shared fragments, which seems like a better approach, since you only have to change fields in a couple of places and you're more or less done. But this approach also gets complex over time.
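To make the shared-fragment idea concrete, here is a minimal sketch of fragment colocation: each component owns the fragment listing the fields it renders, and page queries only compose fragments. The file layout and all names (UserCard, PostPreview, FEED_QUERY) are hypothetical, and plain template strings stand in for graphql-tag's gql so the snippet runs anywhere:

```javascript
// fragments/user.js -- owned by the (hypothetical) UserCard component
const USER_CARD_FRAGMENT = `
  fragment UserCardFields on User {
    id
    name
    avatarUrl
  }
`;

// fragments/post.js -- owned by the (hypothetical) PostPreview component,
// which itself renders a UserCard for the author
const POST_PREVIEW_FRAGMENT = `
  fragment PostPreviewFields on Post {
    id
    title
    author { ...UserCardFields }
  }
`;

// pages/feed.js -- the page query only composes fragments; it never lists
// concrete fields itself
const FEED_QUERY = `
  query Feed {
    feed {
      ...PostPreviewFields
    }
  }
  ${POST_PREVIEW_FRAGMENT}
  ${USER_CARD_FRAGMENT}
`;
```

With this layout, adding a field to the UserCard component means editing USER_CARD_FRAGMENT in exactly one place; every query that spreads the fragment picks the new field up automatically.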
How do you structure queries/mutations/fragments in a scalable way? Any suggestions or articles would be helpful.
Second, a slightly more architectural question: where do you transform data for the view? Is it better to normalise it using an Apollo TypePolicy, in a higher-order component, or somewhere else entirely?
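For what it's worth, one way to keep that decision reversible is to write the transform as a pure function; the same function can then be called from a TypePolicy read() or from a wrapper/higher-order component. A minimal sketch, with an entirely hypothetical Article shape:

```javascript
// A pure view-model transform, kept separate from wherever it gets wired in.
// The Article shape and field names here are made up for illustration.
function toArticleView(article) {
  return {
    id: article.id,
    title: article.title.trim(),
    // Pre-format the date once, instead of in every component that renders it.
    published: new Date(article.publishedAt).toISOString().slice(0, 10),
    authorName: article.author ? article.author.name : 'Unknown',
  };
}

const view = toArticleView({
  id: '1',
  title: '  Hello  ',
  publishedAt: '2020-12-18T10:00:00Z',
  author: null,
});
```

Because the function has no Apollo or React dependency, it can be unit-tested on its own and moved between a TypePolicy and a component wrapper without rewriting it.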
I know these are broad questions and the answer depends on the use case (what kind of app you are building and so on), but let's say you are building a project that needs many features added on the go and is not a simple presentational website.
Thank you, whoever suggests anything.
Related
I have an offline React web app where all data is stored locally in indexedDB. There is no server other than the file hosting for static assets. I am getting to the point where I am starting to look into using redux now but I am trying to understand the tradeoffs between moving more data into the store and continuing to rely on the DB. What's the idiomatic way of using a local db with redux?
Currently my app is composed of several container components that each fetch data from the db in componentWillMount. One option for integrating redux is to keep this mostly the same, with the only difference being the state is kept in the store and data is fetched using actions and thunks.
Alternately, I have seen lots of example code that loads all the data into the store at launch. This makes the whole app more deterministic, easier to test and reproduce. Switching between the main components would happen instantly (at the expense of initial app load). But I lose the benefits the DB provides like indexes and nice queries.
It seems like it would be unreasonable to load literally the whole db into the store; at least in my case, that would be about 10MB of data, maybe more. So I will always have at least some components that will need to continue fetching their data on mount. But there's a subset of data that is central to the app, and it can be argued that that table should be loaded in its entirety (this would be about 5,000 to 10,000 objects, probably).
What's the idiomatic way to work with local storage and redux? I get the sense that async fetches in componentWillMount are not idiomatic if they can be avoided. Even in instances where the state is small enough that it can be fully loaded into the store, is it worth giving up the benefits of a nice, efficient query interface?
Edit: I should mention: I am using Dexie, which is a really, really wonderful library for working with indexedDB. It's fast, has a nice query interface, handles migrations etc... I'd really like to continue using Dexie unless there's a really strong reason to do otherwise.
For reference, here's a discussion on this topic on Dexie's GitHub. The general takeaway from that is "it depends". Not quite the answer I was looking for, so I am hoping to get more insight if possible.
Answering this myself with what I've discovered so far. If a better answer comes along I'll be happy to mark it accepted.
TL;DR: It really does depend. The idiomatic way is indeed to put as much in the store as makes sense. However, it is not un-idiomatic to fetch data asynchronously from elsewhere when that makes sense; many applications would simply be impractical otherwise.
Dan Abramov's egghead tutorial (even titled "Building React Applications with Idiomatic Redux") takes the approach of keeping all state in the store and periodically persisting it as one giant blob (or the relevant slice).
This has the advantage that you use the redux store as usual and persistence is essentially transparent. If your data is small enough that this isn't a problem, this is great.
As @seanyesmunt noted, there is redux-persist, which basically does a more sophisticated version of this.
Of course, shortly after that he then rewrites the tutorial to fetch data from an API. At this point you can just pretend that API is IndexedDB instead, it's really no different. You lose some benefits of redux by not having completely deterministic state as a whole, but in many cases you have no choice.
I ended up doing essentially the same thing as Dexie's Redux sample code. Basically, using thunks to async write to the DB on certain actions.
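The pattern looks roughly like the sketch below. To keep it self-contained, both the store and the DB are faked in a few lines; in the real app the store would come from redux (with redux-thunk applied), and db.todos would be a Dexie table whose async toArray() the fake mimics:

```javascript
// Fake async "DB" standing in for a Dexie table; the row data is made up.
const db = {
  _rows: [{ id: 1, text: 'buy milk' }, { id: 2, text: 'write code' }],
  todos: {
    toArray: async () => db._rows.slice(),
  },
};

function reducer(state = { todos: [], loading: false }, action) {
  switch (action.type) {
    case 'todos/loading': return { ...state, loading: true };
    case 'todos/loaded': return { todos: action.payload, loading: false };
    default: return state;
  }
}

// Minimal store with thunk support (roughly what redux + redux-thunk give you).
function createStore(reduce) {
  let state = reduce(undefined, { type: '@@init' });
  function getState() { return state; }
  function dispatch(action) {
    if (typeof action === 'function') return action(dispatch, getState);
    state = reduce(state, action);
    return action;
  }
  return { getState, dispatch };
}

// The thunk: read from the DB asynchronously, then dispatch plain actions.
function loadTodos() {
  return async (dispatch) => {
    dispatch({ type: 'todos/loading' });
    const rows = await db.todos.toArray();
    dispatch({ type: 'todos/loaded', payload: rows });
  };
}

const store = createStore(reducer);
```

A component would then just `store.dispatch(loadTodos())` on mount (or in a route loader) and render from the store; writes work the same way, with a thunk that awaits the Dexie call before dispatching a success action.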
EDIT 2020-12-18: I recommend using dexie-react-hooks and the useLiveQuery() hook. It's an amazing experience to work with and takes away all the complexity around this. See also this blog post about it.
My old answer was:
This question is of great interest to me as well, as I have been experimenting with React and Dexie on the last project I was in. I am personally examining how GraphQL could address this scenario, but I am still learning; I hope to provide a GraphQL/Dexie example. From what I understand, GraphQL would act as the service layer, and its "schema" (backend store) would use the Dexie queries required to produce the flatter, simpler data shape GraphQL requires. I will be looking at a ready-to-use GraphQL sample from Apollo or Facebook to get started, and I believe it will be simple to use.
I generally don't think reading the entire db into memory scales. And I believe application startup time is crucial to end users, so I really hope to find a perfect architecture for this. Currently my hope rests on GraphQL.
I'm developing a web project with React.
But the bigger the code gets, the slower it becomes, and keeping all the code in one system easily becomes complicated and hard to maintain.
Is there a way to develop each component with Redux, with its side effects all in one module?
For example, modularising one component (container) together with its actions, stores, and side effects, and attaching it to the main code with the build system.
At the end of the day, a single store will contain all state in redux. The exception is if you choose to run two separate apps on the same page, but they wouldn't be linked in any way whatsoever (so ignore that case).
However, you can use combineReducers to join reducers from multiple components into one store while keeping them separate. For the majority of apps this will suffice completely, and I would find it hard to imagine it causing performance issues unless it is set up incorrectly.
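To illustrate, here is what combineReducers does, hand-rolled in a few lines (in a real app you would import it from redux; the counter/user slices are made-up examples):

```javascript
// Hand-rolled version of redux's combineReducers, to show the mechanism:
// each key owns a slice of state, handled only by its own reducer.
function combineReducers(reducers) {
  return function (state = {}, action) {
    const next = {};
    for (const key of Object.keys(reducers)) {
      next[key] = reducers[key](state[key], action);
    }
    return next;
  };
}

// Each feature/component owns its own slice reducer...
function counter(state = 0, action) {
  return action.type === 'INCREMENT' ? state + 1 : state;
}
function user(state = { name: null }, action) {
  return action.type === 'SET_NAME' ? { name: action.name } : state;
}

// ...and the root reducer joins them into one state tree.
const rootReducer = combineReducers({ counter, user });

let state = rootReducer(undefined, { type: '@@init' });
state = rootReducer(state, { type: 'INCREMENT' });
state = rootReducer(state, { type: 'SET_NAME', name: 'Ada' });
```

Each slice reducer only ever sees its own slice, so modules stay independent even though there is a single store.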
Your question doesn't lend itself to one concrete answer, but rather to patterns. I would look into "ducks" for redux: it's not a technology or library, but a pattern for keeping your stores and components modular.
Ducks: https://github.com/erikras/ducks-modular-redux
Explanation: https://medium.com/@scbarrus/the-ducks-file-structure-for-redux-d63c41b7035c#.36dsdqd5q
Personal favourite structure doc: https://hashnode.com/post/tips-for-a-better-redux-architecture-lessons-for-enterprise-scale-civrlqhuy0keqc6539boivk2f
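For illustration, a minimal duck might look like this: the action types, action creators, and reducer for one feature all live in a single module, e.g. ducks/todos.js (all names here are invented for the example):

```javascript
// Action types are namespaced app/feature/ACTION to avoid collisions
// between ducks.
const ADD = 'my-app/todos/ADD';
const TOGGLE = 'my-app/todos/TOGGLE';

// Action creators live next to the types they produce.
function addTodo(text) { return { type: ADD, text }; }
function toggleTodo(index) { return { type: TOGGLE, index }; }

// In a real duck the reducer is the module's default export, ready to be
// handed to combineReducers.
function todosReducer(state = [], action) {
  switch (action.type) {
    case ADD:
      return [...state, { text: action.text, done: false }];
    case TOGGLE:
      return state.map((t, i) =>
        i === action.index ? { ...t, done: !t.done } : t
      );
    default:
      return state;
  }
}

let todos = todosReducer(undefined, { type: '@@init' });
todos = todosReducer(todos, addTodo('ship it'));
todos = todosReducer(todos, toggleTodo(0));
```

Everything a feature needs travels in one file, so adding or removing the feature touches one module plus one line in the root reducer.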
If you still feel like redux doesn't align with your modular app, you can consider not using it - sometimes there is no need for it. https://medium.com/@dan_abramov/you-might-not-need-redux-be46360cf367#.p7j6cioou
I've been looking at the Baobab library and am very attracted to the "single-tree" approach, which I interpret as essentially a single store. But so many Flux tutorials seem to advocate many stores, even a "store per entity." Having multiple stores seems to me to present all kinds of concurrency issues. My question is, why is single store a bad idea?
It depends on what you want to do and how big your project is. There are a few reasons why having several stores is a good idea:
If your project is not so small after all, you may end up with a huge 2000-3000 line store, and you don't want that. That's the point of writing modules in general: you want to avoid files bigger than 1000 lines (and below 500 is even nicer :) ).
Writing everything in one store means you can't enjoy the dependency management the dispatcher offers through the waitFor function. It's going to be harder to check dependencies and potential circular dependencies between your models (since they are all in one store). I would suggest you take a look at https://facebook.github.io/flux/docs/chat.html for that.
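To show what waitFor buys you, here is a heavily simplified, hand-rolled dispatcher in the spirit of Flux's (the real Flux Dispatcher additionally detects circular dependencies): a store can declare that another store must process the action first.

```javascript
// Minimal dispatcher sketch modelled on Flux's waitFor(); store names and the
// action below are hypothetical.
function createDispatcher() {
  const callbacks = new Map();
  const handled = new Set(); // stores that have handled the current action
  let current = null;        // the action being dispatched
  function invoke(id) {
    if (handled.has(id)) return;
    handled.add(id); // mark before calling so a cycle can't loop forever
    callbacks.get(id)(current);
  }
  return {
    register(id, cb) { callbacks.set(id, cb); },
    // Run the named stores' callbacks now, before continuing our own.
    waitFor(ids) { ids.forEach(invoke); },
    dispatch(action) {
      current = action;
      handled.clear();
      for (const id of callbacks.keys()) invoke(id);
    },
  };
}

const dispatcher = createDispatcher();
const order = [];
// MessageStore registers first, but depends on UserStore having processed
// the action before it does.
dispatcher.register('MessageStore', () => {
  dispatcher.waitFor(['UserStore']);
  order.push('message');
});
dispatcher.register('UserStore', () => order.push('user'));
dispatcher.dispatch({ type: 'RECEIVE_ALL' });
```

Even though MessageStore registered first, UserStore handles the action before it, which is exactly the cross-store ordering that a single monolithic store makes implicit and hard to audit.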
It's harder to read. With several stores you can figure out at a glance what types of data you have, and with a constants file for the dispatcher events you can see all your events.
So it's possible to keep everything in one store and it may work perfectly, but if your project grows you may regret it badly and rewrite everything into several modules/stores. That's just my opinion; I prefer to have clean modules and data workflows.
Hope it helps!
From my experience, working with a single store is definitely not a bad idea. It has some advantages, such as:
A single store to access all data can make it easier to query and to reason about relationships between different pieces of data. Using multiple stores can make this a little more difficult (though definitely not impossible).
It will be easier to make atomic updates to the application state (aka data store).
But the way you implement the Flux pattern will influence your experience with a single data store. The folks at Facebook have been experimenting with this, and it seems like they encourage the use of a single data store with their new Relay+GraphQL stack (read more about it here: http://facebook.github.io/react/blog/2015/02/20/introducing-relay-and-graphql.html).
I'm working on an app that needs to keep track of YouTube videos. I want to periodically pull the info on the relevant videos into Datomic and then serve them as embeds with titles, descriptions, etc. A naive way to do that would be to periodically fetch all the info I want and upsert it into my db.
But most of the time, the information won't have changed. Titles and descriptions can change (and I want to notice when they do), but usually they won't. Using the naive approach, I'd be updating entities with the same value over and over again.
Is that bad? Will I just fill up my storage with history? Will it cause a lot of reindexing? Or should I not worry about that, and let Datomic take care of itself?
A less-naive approach would look at the current values and see if they need updating. If that's a better idea, is there an easy way to do that, or should I expect to be writing a lot of custom code for it?
Upserting too often is definitely an issue for the performance of the database. Yes, it will cause indexing issues, and in terms of speed it's not an ideal solution either.
If time is an important factor in your app's performance, I'd write custom code to check the current values and only update when necessary.
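The custom code can be quite small: diff the freshly fetched values against the stored ones and only write the fields that actually changed. A sketch (shown in JavaScript for illustration; the field names are made up, and in the Datomic case the resulting map would become your transaction data):

```javascript
// Compare fetched video info against what is stored and keep only the
// fields whose values differ, so unchanged values are never re-written.
function changedFields(stored, fetched) {
  const diff = {};
  for (const key of Object.keys(fetched)) {
    if (fetched[key] !== stored[key]) diff[key] = fetched[key];
  }
  return diff;
}

const stored = { title: 'My video', description: 'old text', views: 10 };
const fetched = { title: 'My video', description: 'new text', views: 10 };

const diff = changedFields(stored, fetched);
// Only issue an update/transaction when there is something to write:
const needsUpdate = Object.keys(diff).length > 0;
```

Run this per video in the periodic fetch; most of the time the diff is empty and the DB is never touched, which also makes "title/description changed" events trivial to notice.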
I am developing a complex wizard-driven document creation application. I understand the initial domain's requirements and can therefore create an explicit database model with explicit column names. I am also a bit of a novice with MVC. I know the application will eventually need to be more generic: the wizard will change, and different attributes will need to be stored. My current view/instinct is to implement what is known at present using the most traditional techniques that MVC/EF most closely support, then refactor later to support the more generic functionality using technologies such as the XML features in SQL Server, Workflow Foundation, etc. Doing all of that now seems like a big step.
So my question is about the virtue of keeping it simple to start with and refactoring in the more sophisticated features later, rather than building it generic from the start.
Thoughts and wisdom greatly appreciated.
Thanks.
I FEEL like in this situation (read the OP's comments), if you go for a simple "demo" version of your wizard with more hardcoded stuff than you will want in the end, you'll end up scrapping the demo instead of refactoring it. HOWEVER, I'm not saying it's a bad way to go.
From my point of view, there are two ways to approach the development process of such an application.
The first is doing a quick sketch version of the application, as mentioned above. Doing so will make you realise the pros and cons of going in one direction or another, will reveal things that have to be built one way rather than another, and all that kind of stuff. This is the "code monkey" method: just type the damn code!
The second is going the more UML route and drawing diagrams of exactly what you want. However, without much experience in UML design, this may end up as a huge waste of time, since you will go on to build your application thinking you have everything figured out, then get to writing the code and realise there is stuff you didn't account for. This path should be the better route, but a lack of experience doing this might cost you time and money.