When I run a .fetch() command, it first returns null; then, supposing I have 100 documents, it keeps loading them one by one and the counter updates progressively from 1 to 100. I don't want that to happen. I want all the results to be displayed at once, after the fetch process has completed.
Also, how can I display a relevant message to the user if no documents exist? The fetch method doesn't work for me there, since it returns 0 at first, so "No document found" flashes for a second.
dbName.find({userID:"234234"}).fetch()
Even though the above query matches 100 docs, it first returns null and then keeps loading the documents one by one. I want it to load them all at once, or just return something if no docs exist.
I don't want that to happen. I want all the results to be displayed at once after the fetch process has been completed
To really obtain all documents at once on the client you will have to write a Meteor Method that returns all the documents:
Meteor.methods({
  'allDocs'() {
    return dbName.find({ userID: "234234" }).fetch()
  }
})
Note that you have to call fetch on the cursor to return the documents; otherwise you will face an "unhandled promise rejection" error.
Then call it from the client as usual. You can even add the documents to your client-side local collection without affecting allow/deny (which should be off / deny all by default):
Meteor.call('allDocs', (err, documents) => {
  // ... handle err
  // all client collections have a local collection accessible via ._collection
  const localCollection = dbName._collection
  documents.forEach(doc => localCollection.insert(doc))
})
Advantages:
Returns all documents immediately
Fewer resources consumed (no publication observers required)
Works with caching tools, such as ground:db, to create offline-first applications
Disadvantages:
You should limit the query and access to your collections using Methods as much as possible (e.g. using mdg:validated-method), which can require much more effort than shown in this example
Not reactive! If you need reactivity on the client, you need to include Tracker and reactive data sources (ReactiveVar etc.) to provide a decent reactive user experience
Manual syncing can become frustrating and is error-prone
Your question is actually about the subscription and its state of readiness. While it is not yet ready, you can show a loading page, and once it is, you can run .fetch() to get the whole array. This logic could be put in your withTracker call, e.g.:
export default withTracker((props) => {
  const sub = Meteor.subscribe('users');
  return {
    ready: sub.ready(),
    users: sub.ready() && Users.find({ userID: props.userID }).fetch()
  };
})(UserComponent);
Then, in your component, you can decide whether to render a spinner (while ready == false) or the users.
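For illustration, a minimal sketch of such a UserComponent, assuming the ready and users props from the container above; Spinner and the user fields are hypothetical:

import React from 'react';

const UserComponent = ({ ready, users }) => {
  if (!ready) return <Spinner />; // Spinner is a placeholder for any loading component
  if (users.length === 0) return <p>No documents found</p>; // covers the empty case
  return (
    <ul>
      {users.map(user => <li key={user._id}>{user.name}</li>)}
    </ul>
  );
};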
Your question is not entirely clear to me in terms of tooling (please state which database connector library you are using), but firstly, given that you're doing a database access, your .fetch() call is most likely not a sync function but async, and also most likely handled by a promise.
Secondly, given that you're using React, you want to set the new state only after you get all the results back.
If fetch returns a promise, then just do:
dbName.find({ userID: "234234" }).fetch().then(results => {
  setState({ elements: results.data }) // do your processing accordingly
})
By only calling setState inside the promise callback, you are guaranteed to have all the results fetched at that instant, and only then do you update your component state, either using your React class component's this.setState or with hooks like useState (much cleaner).
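For illustration, here is a minimal hooks version; it assumes that .fetch() indeed returns a promise resolving to an object with a data array, and the component and field names are hypothetical:

import React, { useState, useEffect } from 'react';

function UserList({ userId }) {
  const [elements, setElements] = useState(null);

  useEffect(() => {
    dbName.find({ userID: userId })
      .fetch()
      .then(results => setElements(results.data)); // one state update, all results at once
  }, [userId]);

  if (elements === null) return <p>Loading...</p>; // promise not resolved yet
  if (elements.length === 0) return <p>No documents found</p>;
  return <ul>{elements.map(e => <li key={e._id}>{e.name}</li>)}</ul>;
}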
I wanted to get your opinion on something.
I'm trying to understand how a subscription works. However, I couldn't find a way to pull an array of objects in a subscription. For example, if I use createMany, I cannot return all the results via the subscription.
The second issue is that if I return a single item, for example a new item, I have to "manually" (air quotes) add that item to the list that is already displayed. But this makes me feel I'm not actually displaying true real-time data.
So my question is: using something like
useEffect(() => {
  // refetching original query when subscription is triggered
  refetch();
}, [updatedNotificationData]);
would there be any downside, like hitting the server more than I should? Let's say every time a refetch happens I might be pulling thousands of notifications (I know there is caching, but still). Or is there a better way to deal with bringing in new data?
Also, I tried adding the subscribed data to the original list, but for some reason React adds the same item twice every time.
Thanks in advance if you can point me in the right direction.
if I use createMany, I cannot return all the results via the subscription.
That shouldn't be a problem if you define the return type of the subscription as an array:
type Subscription {
  onChange: [ObjectType]
}
It would allow you to avoid fetching again, but updating the cache can get a bit complicated.
Also, I tried adding the subscribed data to the original list, but for some reason React adds the same item twice every time.
In case you are using the subscribeToMore method, it's not really React's fault but the way the updateQuery method works: see the GitHub issue regarding this.
My workaround was to subscribe via the useSubscription hook and handle the cache modifications inside the onSubscriptionData callback (see the Apollo documentation), and also to set the useQuery hook's skip option once I get the data, so it won't query on each rerender.
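For illustration, a minimal sketch of that workaround; the GET_NOTIFICATIONS and ON_NOTIFICATION_ADDED documents, the field names, and the List/Spinner components are all hypothetical:

import React, { useState } from 'react';
import { useQuery, useSubscription } from '@apollo/client';

function Notifications() {
  const [items, setItems] = useState(null);

  // Initial fetch; skipped once we already hold the data locally
  useQuery(GET_NOTIFICATIONS, {
    skip: items !== null,
    onCompleted: data => setItems(data.notifications),
  });

  // Merge pushed items into local state instead of refetching everything
  useSubscription(ON_NOTIFICATION_ADDED, {
    onSubscriptionData: ({ subscriptionData }) => {
      const added = subscriptionData.data.notificationAdded;
      setItems(prev => (prev ? [...prev, added] : [added]));
    },
  });

  return items ? <List items={items} /> : <Spinner />;
}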
I have a react-redux application which:
Loads N records from the database depending on a "limit" query parameter (by default 20 records) on first application load (initialization)
Every 10 seconds, the app requests the same (or newer) records from the database to update the data in real time
If a user changes filters, the app requests new records from the database according to the filter and re-renders (+ changes the polling interval to load data according to the filters)
If the user scrolls down, the app automatically loads more records.
The problem is that if a user, for instance, tries to filter something out while the interval is loading more data at the same time, the two requests can clash and overwrite each other. How can I be sure of the request sequence in a React-Redux app? Maybe there is a common approach to properly queueing requests?
Thanks in advance!
I am not sure what you mean by 'clash'. My understanding is that the following will happen:
Assuming that both requests are successful, data is retrieved for each of them, the Redux state is updated twice, and the component which renders the updated state renders twice (and the time between the two renders might be very short, which might not be very pleasant for the user)
If you want only one of these two requests to refresh the component, then a possible solution may be the following:
Each request starts, before retrieval of data from the database, by creating a 'RETRIEVAL_START' action. 'RETRIEVAL_START' will set a redux state variable 'retrievalInProgress'
If you want, in such a case, to get results only from the 1st of the two requests, you can check, before calling the action creator from the component, whether 'retrievalInProgress' is on. If it is, don't call the action creator (in other words, do not request data while a request is in progress). 'retrievalInProgress' will be cleared upon successful or failed retrieval of data.
If you want to get results only from the 2nd of the two requests, then make 'retrievalInProgress' a counter, instead of a boolean. In the 'retrievalSuccess' action of the reducer, if this counter is higher than 1, it means that a new request already started. In this case, do not update the state, but decrement the counter.
I hope that this makes sense. I cannot be 100% sure that this works before I test it, which I am not going to do :), but this is the approach I would take.
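A minimal sketch of the boolean-flag variant, assuming redux-thunk and hypothetical action types and endpoint:

const initialState = { records: [], retrievalInProgress: false };

function recordsReducer(state = initialState, action) {
  switch (action.type) {
    case 'RETRIEVAL_START':
      return { ...state, retrievalInProgress: true };
    case 'RETRIEVAL_SUCCESS':
      return { ...state, records: action.payload, retrievalInProgress: false };
    case 'RETRIEVAL_FAILED':
      return { ...state, retrievalInProgress: false };
    default:
      return state;
  }
}

// Thunk action creator: do not request data while a request is in progress
function fetchRecords(filters) {
  return (dispatch, getState) => {
    if (getState().records.retrievalInProgress) return; // drop the second request
    dispatch({ type: 'RETRIEVAL_START' });
    return fetch('/api/records?' + new URLSearchParams(filters))
      .then(res => res.json())
      .then(data => dispatch({ type: 'RETRIEVAL_SUCCESS', payload: data }))
      .catch(() => dispatch({ type: 'RETRIEVAL_FAILED' }));
  };
}

For the keep-only-the-second-request variant, retrievalInProgress would instead be a counter, incremented on start and checked and decremented in the success branch, as described above.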
In several blogs, videos, and so on, there's a CRUD tutorial with Redux.
None of them (AFAIK, after surfing around) deals with a fully asynchronous server API, i.e. fire-and-forget behavior.
Main commands in a CQRS environment frequently deal with this kind of fire-and-forget.
Let's take a fictive example of Twitter to easily get the idea:
Basically, in the context of a synchronous CRUD API, you likely have:
Redux Action: POST_TWEET
Server API: returning the entire created tweet in the response data.
State: TweetReducer exploring and storing the created tweet's data from the response.
UI: listening to the new tweet from the Tweet state directly.
The TweetReducer, besides the classical fetching APIs, can fully handle the POST_TWEET action since it can encompass the new tweet directly.
However, in the context of a fire-and-forget server API:
Redux Action: POST_TWEET
Server API: returning only the tweet's id (e.g. in a Location header) in the response.
State: TweetReducer does not handle the creation, since the tweet is not available at the time the success action is triggered.
Thus a new Redux state dedicated to handling tweet creation, labeled TweetCreation, typically owning these properties: (data: {id: string}, inProgress: boolean, errors: []).
It would then grab the newly created tweet's id from the data and allow the UI to listen to this state (TweetCreation).
UI: listening to the TweetCreation state, and hence displaying that the tweet was sent, or even polling the server at some interval to fetch the full tweet.
Is adding another state slice to the Redux store, as some people do, a good practice for dealing with fire-and-forget APIs?
Is it "conventional" in the community, or is there another, cleverer way?
1. Creating a separate state for pending tweets
For a start, you'd need to change your TweetCreation to an array in case the user makes a second tweet before the first is confirmed.
So your shape would look like this: { pendingTweets: [], confirmedTweets: [] }.
In POST_TWEET, you append the new tweet to pendingTweets.
In SET_TWEET_ID, you remove the matching tweet from pendingTweets and push it to confirmedTweets.
In your component, you probably do something like confirmedTweets.concat(pendingTweets).map(...).
2. Using the same state for pending tweets
The shape will just be { tweets: [] }.
In POST_TWEET, you append the new tweet to tweets.
In SET_TWEET_ID, you update the matching tweet in tweets.
In your component, you do tweets.map(...).
Conclusion
Using the same state for pending tweets seems like the simpler (and therefore better) approach; a sketch of it follows after the considerations below.
Additional considerations (for both approaches)
I left out details about avoiding direct state mutations when updating since that's very basic.
You probably need to do something like generating a temporary id for the pending tweet and sending it back from the server so that you can find the matching tweet in SET_TWEET_ID.
The temporary id can use a different object key (or an additional flag) so that you can distinguish between a pending and a confirmed tweet in the component (e.g. to render a loading icon beside pending tweets).
Replacing [] with {} using id as object key might be better (depending on the exact requirements) but that's not the focus of this question.
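For illustration, a minimal sketch of the second approach combined with the temporary-id idea above; the POST_TWEET and SET_TWEET_ID payload shapes are assumptions:

function tweetsReducer(state = { tweets: [] }, action) {
  switch (action.type) {
    case 'POST_TWEET':
      // Append the pending tweet under a client-generated temporary id
      return {
        tweets: [
          ...state.tweets,
          { tempId: action.tempId, text: action.text, pending: true },
        ],
      };
    case 'SET_TWEET_ID':
      // Server confirmed: attach the real id and clear the pending flag
      return {
        tweets: state.tweets.map(tweet =>
          tweet.tempId === action.tempId
            ? { ...tweet, id: action.id, pending: false }
            : tweet
        ),
      };
    default:
      return state;
  }
}

The pending flag doubles as the distinguishing marker mentioned above, e.g. for rendering a loading icon beside unconfirmed tweets.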
I have a large app with some autocomplete text inputs that retrieve search suggestions from the backend on every keystroke.
I want to save recent search query results to avoid multiple backend calls for the same query if the user deletes characters. I also want to expire these queries after some period of time to keep search results fresh.
I also want to have a loading indicator for the current text input if the backend call isn't complete yet for that exact value.
The question is: where to manage this state and the actions associated with it (below is a sample of the state shape I need)?
Managing it in the main Redux store looks like overkill: this state is a temporary, intermediate state; it's very short-lived (queries should expire after a while); there may be many instances of the same component on the screen; and different instances might use different backend calls.
Managing the recent search queries and search results in the local React component state object looks like a fine solution.
But now we have the backend calls, which I don't want to fire from within the component; I want them to go through the full-blown Flux process, with proper actions and reducers, passing the results (or errors) into props through the store connector.
So eventually, things here don't fit properly with each other: I don't want to manage the state in the main Redux store, but I do want the backend calls (whose results are the main part of that state) to go through the main reducers + store lifecycle.
Any advice for a robust, maintainable, and easy-to-figure-out-without-docs architecture is appreciated.
The state I need for every instance of this component looks something like:
(let's say I typed in dog, and the last result hasn't come back yet):
{
  currentSearchInput: 'dog',
  recentQueries: [
    {
      input: 'd',
      isLoading: false,
      results: [...]
    },
    {
      input: 'do',
      isLoading: false,
      results: [...]
    },
    {
      input: 'dog',
      isLoading: true,
      results: null
    }
  ]
}
In my opinion, using the object structure noted above within the component's internal state is just fine. Write a function that accepts the results of the HTTP request that come back through Redux and updates the results field of the appropriate object in the array. Personally, I would store the promise in the results field. Each time componentWillReceiveProps gets called (due to your reducers returning new Redux state), use setState to update your recentQueries.
I'd also set a limit, say a five-recent-queries max or something like that.
That being said, I use the Google Places API to return address/establishment suggestions, and the data is so small that it wasn't worth it to do this. Most of the results are under 10kb.
EDIT:
Based on our discussions in the comments, here is what I would suggest doing.
In the onChange handler for the input:
Pass the search criteria to the action as you normally would
In the action creator, return the results of the API call AND the search criteria
In the reducer:
Check the length of the array that holds your search calls (the Redux store value). If it is less than five, simply concat the current values and the new values and return. If it is already at five, overwrite the 0th position of the array with the new content, e.g.:
return [action.newApiResult, ...state.slice(1)]; // replace the 0th entry without mutating the previous state
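Put together, a minimal sketch of that reducer; the action type, the payload fields, and the five-entry cap are assumptions:

const MAX_RECENT_QUERIES = 5;

function recentQueriesReducer(state = [], action) {
  switch (action.type) {
    case 'SEARCH_SUCCESS': {
      const entry = {
        input: action.searchCriteria, // the criteria echoed back by the action creator
        results: action.newApiResult, // the API results for that input
      };
      return state.length < MAX_RECENT_QUERIES
        ? state.concat(entry)         // room left: append
        : [entry, ...state.slice(1)]; // cap reached: overwrite the 0th position
    }
    default:
      return state;
  }
}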
The question I have is about how to store a collection of models to a RESTful back end.
I'm using Backbone.js, and I'm considering to either:
Use Async.js parallel method and post / put each model separately in a loop, after which a general callback method is triggered;
Send a collection of objects to the back end, and use a database transaction to make sure that all models are properly saved with a single commit;
The first method seems to cause a lot of overhead, because I have to make different calls to save the models.
But when considering the second approach, Laravel 4 does not by default allow a POST / PUT on a collection.
What would be your preferred approach, and more importantly, why?
The second approach is definitely the way to go in my opinion: it reduces latency and saves bandwidth when used with a large number of models, since all the HTTP overhead (headers, etc.) is sent only once.
In your controller, use something like this (it may not work as-is, I don't have access to a Laravel installation to test it):
public function postCollection() {
    $collection = Input::get("collection");
    DB::transaction(function() use ($collection) {
        foreach ($collection as $data) {
            // In this example we assume it's a collection of users
            // Of course in a real app you would also do input validation
            // JSON input is decoded to arrays, hence the array access
            $user = User::create(["name" => $data["name"], "email" => $data["email"]]);
        }
    });
    // Example success response, will be automatically serialized to JSON
    return ["status" => "success"];
}
This loops over the collection element of your JSON input, which should be a list of models. It should obviously also do validation and possibly other things. The whole loop is wrapped in DB::transaction(), which will roll everything back if an exception occurs inside it.
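On the Backbone side, a minimal sketch of sending the whole collection in one request; the '/users/collection' route and the response handling are assumptions:

// jQuery is typically available alongside Backbone, so $.ajax is used here
$.ajax({
  url: '/users/collection', // hypothetical route mapped to postCollection()
  type: 'POST',
  contentType: 'application/json',
  data: JSON.stringify({ collection: myCollection.toJSON() }),
  success: function (response) {
    if (response.status === 'success') {
      // all models were committed in a single transaction
    }
  },
  error: function () {
    // the transaction was rolled back; none of the models were saved
  }
});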