I'm using PouchDB as a local database for an app. I want to query the results from PouchDB and load them into React.js. However, even though I'm using the waitFor() method, the results of the PouchDB query return too late. I think I don't understand the use of waitFor() correctly; maybe someone can shed some light on it.
I have two stores: the DbStore, which retrieves data from the database, and the FileExplorerStore, which is used by my React components.
DbStore.dispatchToken = AppDispatcher.register(function (payload) {
  var action = payload.action;
  var folder = payload.action.folder;
  switch (action.type) {
    case 'OPEN_FOLDER':
      if (folder === 'start') {
        DbStore.init();
      }
      else {
        DbStore.createPath(folder);
      }
      DbStore.emitChange();
      break;
    default:
      // do nothing
  }
  return true;
});
The DbStore has a loadFiles function that loads the DB files into the _files array. For illustrative purposes I've copied the code below:
loadFiles: function (_path) {
  var fileNames = fs.readdirSync(_path);
  _files = [];
  fileNames.forEach(function (file) {
    console.log(file);
    db.query(function (doc) {
      emit(doc.name);
    }, {key: "bower.json"}).then(function (res) {
      _files.push(res.rows[0].key);
    });
  });
},
The FileExplorerStore has a getFiles() method that retrieves the files from the _files array. However, this array is always empty, because getFiles() is executed before the array is filled.
FileExplorerStore
FileExplorerStore.dispatchToken = AppDispatcher.register(function (payload) {
  var action = payload.action;
  switch (action.type) {
    case 'OPEN_FOLDER':
      AppDispatcher.waitFor([DbStore.dispatchToken]);
      FileExplorerStore.emitChange();
      break;
    default:
      // do nothing
  }
  return true;
});
In React.js, the getInitialState function calls getFiles() from the FileExplorerStore to display the files.
How can I fix this or model this in a better way?
The waitFor in the dispatcher released by the Facebook team was not designed for that (at least as of the release on Sep 11, 2014). It just makes sure the callback whose dispatchToken you pass to waitFor has executed and returned, and only then does it start executing the next registered callback.
So in your case this is actually the expected behaviour.
What I would do is separate the action into two parts: first a fetch action, and second OPEN_FOLDER as handled in FileExplorerStore. Assuming the DB fetch action is named DB_FETCH, it triggers your database query and loads the data into _files; in the fetch success callback, trigger the OPEN_FOLDER action. Where you trigger the first action is up to how you want to design it. I would have a third action named INIT_OPEN_FOLDER that triggers DB_FETCH and shows a loading indicator in the UI; then, when the OPEN_FOLDER emit arrives, display the data. A minimal sketch of that flow is below.
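This sketch assumes loadFiles is changed to return a promise that resolves once all the db.query calls have finished; the DbActions action creator is hypothetical, but the payload shape matches the question's payload.action usage:
var DbActions = {
  initOpenFolder: function (folder) {
    // Step 1: start the fetch; stores can react by showing a loading indicator.
    AppDispatcher.dispatch({ action: { type: 'DB_FETCH', folder: folder } });
    DbStore.loadFiles(folder).then(function () {
      // Step 2: _files is now filled, so it is safe to open the folder.
      AppDispatcher.dispatch({ action: { type: 'OPEN_FOLDER', folder: folder } });
    });
  }
};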
Coding hobbyist here; I recently went through a React basics course and am now studying APIs. In trying to work with both, I've run into a situation where the video course instructor says fetching from the API is best done in useEffect, since it reaches outside the app and could have side effects. In making these calls, I have one API that could get away without a cleanup return function (though I'd like to write one if that is best practice). The second API certainly needs a cleanup, but I can't map over an array of objects to fetch from a key/property in each object. What returns as data looks like an object, but what gets set to state is an array of Promises. (I don't have much experience with new Promise or async/await, if writing those correctly is part of the solution.)
Things I'm especially confused about
First, the first useEffect cleanup function was supposed to set the count state to 1, but when the component mounts, unmounts, and mounts again, it still seems to enter the if block.
Second, in the second useEffect, if you console.log the fetched data at the .then block you get the data object, but finalCoinArr is just an array of Promises.
Third, is the second useEffect cleanup function going to set the gecko.data state back to an empty array? If so, is there a way not to empty that array, but instead, after the first API call saves the data there, tell it not to make another API call?
Here are a couple of the resources I've read
https://www.freecodecamp.org/news/async-await-javascript-tutorial/
https://beta.reactjs.org/learn/synchronizing-with-effects
Why useEffect running twice and how to handle it well in React?
function App() {
  const [unsplash, setUnsplash] = React.useState({data: {urls: {full: '', regular: ''}, user: {name: '', portfolio_url: ''}}});
  const [gecko, setGecko] = React.useState({data: []});
  const [count, setCount] = React.useState(0);

  const scrimbaUrl = `https://apis.scrimba.com/unsplash/photos/random?orientation=landscape&query=nature`;
  const coinGeckoUrl = `https://api.coingecko.com/api/v3/coins/`;
  const coinArr = [
    {name: 'bitcoin', image: {small: 'https://assets.coingecko.com/coins/images/1/small/bitcoin.png?1547033579'}},
    {name: 'dogecoin', image: {small: "https://assets.coingecko.com/coins/images/5/small/dogecoin.png?1547792256"}},
    {name: 'ethereum', image: {small: "https://assets.coingecko.com/coins/images/279/small/ethereum.png?1595348880"}},
    {name: 'litecoin', image: {small: "https://assets.coingecko.com/coins/images/2/small/litecoin.png?1547033580"}}
  ];

  React.useEffect(() => {
    if (count === 0) {
      fetch(scrimbaUrl)
        .then(res => res.json())
        .then(data => {
          // console.log('called fetch')
          setUnsplash((prevState) => {
            return {...prevState, data: data};
          });
        })
        .catch((err) => {
          console.log(`ScrimbaE:${err}`);
          let defaultBackground = 'https://images.unsplash.com/photo-1503264116251-35a269479413?crop=entropy&cs=tinysrgb&fm=jpg&ixid=MnwxNDI0NzB8MHwxfHJhbmRvbXx8fHx8fHx8fDE2NzExNTg5MTE&ixlib=rb-4.0.3&q=80';
          let defaultName = 'Aperture Vintage';
          let defaultPortfolio = 'https://creativemarket.com/PedroCS?u=PedroCS';
          let defaultUnsplash = {urls: {full: defaultBackground}, user: {name: defaultName, portfolio_url: defaultPortfolio}};
          setUnsplash((prevState) => {
            return {...prevState, data: defaultUnsplash};
          });
        });
    }
    return () => {
      setCount((prevCount) => prevCount + 1);
    };
  }, []);

  React.useEffect(() => {
    if (count === 0) {
      let finalCoinArr = coinArr.map((ele) => {
        return fetch(`${coinGeckoUrl}${ele.name}`)
          .then(res => res.json())
          .then(data => {
            return data;
          });
      });
      setGecko(() => {
        return {data: finalCoinArr};
      });
      console.log(finalCoinArr);
    }
    return () => {
      setGecko(() => {
        return {data: []};
      });
    };
  }, []);

  // console.log(gecko)
  /* Returned:
     {data: Array(4)}
       data: Array(4)
         0: Promise {<fulfilled>: {…}}
         1: Promise {<fulfilled>: {…}}
         2: Promise {<fulfilled>: {…}}
         3: Promise {<fulfilled>: {…}}
         length: 4
  */
Every time a component mounts, it is a specific instance of that component. When it unmounts, that instance is gone forever; if you re-mount it, you get a brand new instance with all fresh original values, and nothing to do with the first instance. So it makes no sense to set state in a cleanup that runs on unmount, because when that instance unmounts it is discarded, along with all its internal state.
The code in your second effect isn't how you do multiple API requests. The .map function is for creating a new array, not for looping over things. You can use it to create an array of Promises, then use Promise.all to execute them all and get their results:
// Here we use .map to create an array of promises and pass them to Promise.all
Promise.all(coinArr.map(coin => fetch(`${coinGeckoUrl}${coin.name}`).then(res => res.json())))
  .then(results => {
    // Here 'results' will be an array of all the parsed responses
    // Do with them whatever you like, like putting them into state
  })
Again, data is not preserved between mounts. If you want to preserve data, you'll have to pull it out of the component and put it somewhere like web storage or a context.
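For example, here is a minimal sketch that keeps the fetched coins in module scope, so a re-mounted component reuses them instead of refetching; the useCoins hook name and shape are illustrative, not from the original code:
let cachedCoins = null; // module scope survives unmount/remount

function useCoins(coinArr, coinGeckoUrl) {
  const [coins, setCoins] = React.useState(cachedCoins || []);

  React.useEffect(() => {
    if (cachedCoins) return; // already fetched once, skip the network
    let cancelled = false;
    Promise.all(
      coinArr.map(coin =>
        fetch(`${coinGeckoUrl}${coin.name}`).then(res => res.json())
      )
    ).then(results => {
      cachedCoins = results; // preserved across mounts
      if (!cancelled) setCoins(results);
    });
    return () => { cancelled = true; }; // cleanup only guards the setState
  }, []);

  return coins;
}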
I'm working on a Node app with Express. I'm chaining several http calls to data api's, each dependent on the previous req's responses.
It's all working except the last call. The last call needs to happen multiple times before the page should render.
Searching has turned up excellent examples of how to chain calls, but not of calling the same API (or HTTP GET, data endpoint, etc.) with different params each time.
I'm trying to do something like this: Using a generator to call an API multiple times and only resolve when all requests are finished?
var getJSON = (options, fn) => {
  .....
}

router.route("/")
  .get((req, res) => {
    var idArray = [];
    var results = [];
    getJSON({
      .... send params here, (result) => {
      //add response to results array
      results.push(result);
      //create var for data nodes containing needed id params for next call
      let group = result.groupsList;
      //get id key from each group, save to idArray
      for (var i = 0; i < group.length; i++) {
        idArray.push(group[i].groupId);
      }
      //use id keys for params of next api call
      dataCallback(idArray);
    });
    function dataCallback(myArray) {
      // number of IDs in myArray determines how many times this API call must be made
      myArray.forEach(element => {
        getJSON({
          .... send params here, (result) => {
          results.push(result);
        });
        // put render in callback so it will render when resolved
      }, myRender());
    };
    function myRender() {
      res.render("index", { data: results, section: 'home'});
    }
  })
I learned the problem with the above code.
You can call functions that are outside of the express route, but you can't have them inside the route.
You can't chain multiple data-dependent calls inside the route.
Anything inside route.get or route.post should be about the data, paths, renders, etc.
This means either using an async library (which I found useless when trying to build a page from multiple data sources, with data dependent on the previous response), or having an additional JS file that you call (from your web page) to get, handle, and model your data, as here: Using a generator to call an API multiple times and only resolve when all requests are finished. You could also potentially put it in your app or index file, before the routes.
(It wasn't obvious to me where that code would go at first. I tried putting it inside my router.post. Even though the documentation says "Methods", it didn't click for me that routes were methods. I hadn't really done more than very basic routes before, and had never looked under the hood.)
I ended up going with a third option. I broke up the various API calls in my screen so that they are only called when the user clicks on something that will need more data, like an accordion or tab switch.
I used an XMLHttpRequest() from my web page to call my own front-end Node server, which then calls the third party API, then the front-end Node server responds with a render of my pug file using the data the API provided. I get html back for my screen to append.
In page:
callFEroutetoapi(_postdata, _route, function (_newdata) {
  putData(_newdata);
});

function putData(tData) {
  var _html = tData;
  var _target = document.getElementById('c-playersTab');
  applyHTML(_target, _html);
}

function callFEroutetoapi(data, path, fn) {
  //url is express route
  var url = path;
  var xhr = new XMLHttpRequest();
  console.log('data coming into xhr request: ', data);
  //xhr methods must be in this strange order or they don't run
  xhr.onload = function (oEvent) {
    if (xhr.readyState === xhr.DONE) {
      //if success then send to callback function
      if (xhr.status === 200) {
        fn(xhr.response);
        // console.log('server responded: ', xhr.response);
      }
      else {
        console.log("Something Died");
        console.log('xhr status: ', xhr.status);
      }
    }
  }
  xhr.onerror = function () { console.log('There was an error.', xhr.status); }
  xhr.open("POST", url, true);
  xhr.setRequestHeader("Content-Type", "application/json;charset=UTF-8");
  xhr.send(JSON.stringify(data));
}
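On the server side, the matching Express route might look something like the hedged sketch below; getJSON is the question's own helper, and the route path, pug partial, and body shape are illustrative assumptions:
const express = require('express');
const router = express.Router();

// Receives the XHR POST from the page, calls the third-party API,
// and responds with a rendered HTML fragment for the page to append.
router.post('/playersTab', (req, res) => {
  getJSON({ /* params built from req.body */ }, (result) => {
    // Render only the fragment, not a full page layout.
    res.render('partials/playersTab', { data: result });
  });
});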
It adds an extra layer, but it was necessary to show the latest, frequently changing data. It's also reusable, which is better for a multiscreen web app. If there were fewer views (completely different screens and co-dependent datasets), the more centralized model.js approach mentioned above would work better.
I'm trying to create a caching function in Angular using RxJS Observable. Originally I created this method using AngularJS $q's deferred promises. Observables and RxJS are new to me, and I still find this way of working somewhat confusing.
This is my current implementation of a getOrCreate caching function: retrieve a single value for a key from storage (this.get()), and if it's not in there, retrieve it elsewhere (fetcher).
Assume fetcher is a slower data source than this.get(). Multiple requests for the same key could fire while we're still retrieving from this.get() so I put in an aggregator: only a single observable is created for multiple requests of the same key.
protected observableCache : {[key: string] : Observable<any>} = {};

get<T>(key : string): Observable<T> { /* Async data retrieval */ }

getOrCreate<T>(key : string, fetcher: () => Observable<T>) : Observable<T> {
  const keyHash = this.hash(key);
  // Check if an observable for the same key is already in flight
  if (this.observableCache[keyHash]) {
    return this.observableCache[keyHash];
  } else {
    let observable : Observable<T>;
    this.get(key).subscribe(
      // Cache hit
      (result) => { observable = Observable.of(result); },
      // Cache miss. Retrieve from the fetcher while creating the entry
      () => {
        fetcher().subscribe((fetchedResult) => {
          if (fetchedResult) {
            this.put(key, fetchedResult);
          }
          observable = Observable.of(fetchedResult);
        });
      }
    );
    // Register and unregister in-flight observables
    this.observableCache[keyHash] = observable;
    observable.subscribe(() => {
      delete this.observableCache[this.hash(key)];
    });
    return observable;
  }
}
This is my current version of that code but it doesn't look like I'm properly handling async code:
The Observable is returned before it's instantiated: return observable fires before observable = Observable.of(result);
There's probably a much better pattern of aggregating all requests for the same key while this.get() is still in-flight.
Can someone help me figure out which Observable patterns should be used here?
I think this might work. Rewritten as:
getOrCreate<T>(key : string, fetcher: () => Observable<T>) : Observable<T> {
  const keyHash = this.hash(key);
  // Check if an observable for the same key is already in flight
  if (this.observableCache[keyHash]) {
    return this.observableCache[keyHash];
  }
  let observable : ConnectableObservable<T> = this.get(key)
    // catch handles the source observable throwing an error: it replaces the
    // stream with the new Observable returned by its callback
    .catch(() => {
      // Cache miss. Retrieve from the fetcher while creating the entry
      return this.fetchFromFetcher(key, fetcher);
    })
    .publish();
  // Register and unregister in-flight observables
  this.observableCache[keyHash] = observable;
  observable.subscribe(() => {
    delete this.observableCache[keyHash];
  });
  observable.connect();
  return observable;
},

fetchFromFetcher(key : string, fetcher: () => Observable<T>) : Observable<T> {
  // Here we create a stream that subscribes to fetcher so we can `this.put(...)`,
  // re-emitting the original value when done
  return Rx.Observable.create(observer => {
    fetcher().subscribe(fetchedResult => {
      this.put(key, fetchedResult);
      observer.next(fetchedResult);
    },
    err => observer.error(err),
    () => observer.complete());
  });
}
Explanations:
Observables are very different from promises. Both deal with async work, and there are some similarities, but they are quite different.
As this.get(...) appears to be asynchronous, your let observable won't be filled until the stream yields a value, so when you assign it to your cache it's normal that it's still undefined.
A great thing about observables (and the main difference from promises) is that you can define a stream before anything gets executed. In my solution, nothing gets called until I call observable.connect(). This avoids a lot of nested subscriptions.
So, in my code I take the this.get(key) stream and tell it that if it fails (.catch(...)) it must fetch the result, and once that's fetched, put it into your cache (this.put(key, fetchedResult)).
Then I publish() this observable. This makes it behave more like promises do: it makes it "hot", meaning all subscribers get values from the same stream, instead of each subscription creating a new stream that starts from zero.
Then I store it in the observable pool, and set it to be deleted when it finishes.
Finally, I .connect(). This only applies if you publish(): connect() is what actually subscribes to the original stream, executing everything you want.
To make this clear (because it's a common error when coming from Promises): in Angular, if you define a stream as
let myRequest = this.http.get("http://www.example.com/")
  .map((result) => result.json());
the request is not sent yet, and every time you call myRequest.subscribe(), a new request to the server is made; it won't reuse the first subscription's result. That's why .publish() is so useful: it means that when you call .connect(), a single subscription is created that triggers the request, and the last result received is shared (observables support streams of many results) with all incoming subscriptions to the published observable.
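To illustrate, here is a minimal sketch of the difference, assuming the same Angular Http service and RxJS 5 operators used above:
// Without publish(), each subscribe() below would fire its own HTTP request.
// With publish(), the stream is shared and nothing runs until connect().
let myRequest = this.http.get("http://www.example.com/")
  .map((result) => result.json())
  .publish();

myRequest.subscribe(data => console.log("first subscriber", data));
myRequest.subscribe(data => console.log("second subscriber", data));

// A single request is sent here; both subscribers receive its result.
myRequest.connect();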
In my React app I have an input element. The search query should be memoized: if the user has previously searched for 'John' and the API has provided valid results for that query, then the next time the user types 'Joh', the previously memoized value should be suggested (in this case, 'John').
I am new to React and am trying caching for the first time. I read a few articles but couldn't implement the desired functionality.
You don't clarify which API you're using or which stack; the solution would vary somewhat depending on whether you are using XHR requests or something over GraphQL.
For an asynchronous XHR request to some backend API, I would do something like the example below.
Query the API for the search term
_queryUserXHR = (searchTxt) => {
  jQuery.ajax({
    type: "GET",
    url: url,
    data: searchTxt,
    success: (data) => {
      this.setState({
        previousQueries: this.state.previousQueries.concat([searchTxt])
      });
    }
  });
}
You would run this function whenever you want to do the check against your API. If the API can find the search string you query, then insert that data into a local state array variable (previousQueries in my example).
Alternatively, you can insert data returned from the database, if there are values unknown to your view (e.g. a database id). Above I just insert searchTxt, which is what we passed into the function based on what the user typed in the input field. The choice is yours here.
Get suggestions for previously searched terms
I would start by adding an input field that runs a function on the onKeyPress event:
<input type="text" onKeyPress={this._getSuggestions} />
then the function would be something like:
_getSuggestions = (e) => {
  let inputValue = e.target.value;
  let {previousQueries} = this.state;
  let results = [];
  previousQueries.forEach((q) => {
    if (q.toString().indexOf(inputValue) > -1) {
      results.push(q);
    }
  });
  this.setState({suggestions: results});
}
Then you can output this.state.suggestions somewhere and add behavior there. Perhaps some keyboard navigation or something. There are many different ways to implement how the results are displayed and how you would select one.
Note: I haven't tested the code above
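For example, a minimal, equally untested sketch of rendering those suggestions (the markup and the _selectSuggestion handler are illustrative):
{/* _selectSuggestion is a hypothetical handler that fills the input */}
<ul>
  {this.state.suggestions.map((suggestion, i) => (
    <li key={i} onClick={() => this._selectSuggestion(suggestion)}>
      {suggestion}
    </li>
  ))}
</ul>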
I guess you have a function somewhere that queries the server, such as:
const queryServer = function(queryString) {
  /* access the server */
}
The trick would be to memoize this core function only, so that your UI thinks it's actually accessing the server.
In JavaScript it is very easy to implement your own memoization decorator, but you could use existing ones. For example, lru-memoize looks popular on npm. You use it this way:
const memoize = require('lru-memoize')
const queryServer_memoized = memoize(100)(queryServer)
This code keeps the last 100 request results in memory. Then, in your code, you call queryServer_memoized instead of queryServer.
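A quick usage sketch (the search term is arbitrary):
queryServer_memoized("john"); // first call actually hits the server
queryServer_memoized("john"); // same argument, so the cached result is returned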
You can create a memoization function:
const memo = (callback) => {
  // We will save the key-value pairs in the following variable. It will be our cache storage
  const cache = new Map();
  return (...args) => {
    // The key will be used to identify the different argument combinations. Same arguments means same key
    const key = JSON.stringify(args);
    // If the cache storage has the key we are looking for, return the previously stored value
    if (cache.has(key)) return cache.get(key);
    // If the key is new, call the function (in this case fetch)
    const value = callback(...args);
    // And save the new key-value pair to the cache
    cache.set(key, value);
    return value;
  };
};
const memoizedFetch = memo(fetch);
This memo function will act like a key-value cache. If the params (in our case the URL) of the function (fetch) are the same, the function will not be executed. Instead, the previous result will be returned.
So you can just use this memoized version, memoizedFetch, in your useEffect to make sure the network request is not repeated for that particular query.
For example you can do:
// Place this outside your React element
const memoizedFetchJson = memo((...args) => fetch(...args).then(res => res.json()));

useEffect(() => {
  memoizedFetchJson(`https://pokeapi.co/api/v2/pokemon/${pokemon}/`)
    .then(response => {
      setPokemonData(response);
    })
    .catch(error => {
      console.error(error);
    });
}, [pokemon]);
Demo integrated in React
I am trying to build an Angular project with Pusher using the angular-pusher wrapper. It's working well but I need to detect when the user loses internet briefly so that they can retrieve missed changes to data from my server.
It looks like the way to handle this is to reload the data on Pusher.connection.state('connected', ...), but this does not seem to work with angular-pusher; I am getting "Pusher.connection is undefined".
Here is my code:
angular.module('respondersapp', ['doowb.angular-pusher'])
  .config(['PusherServiceProvider',
    function(PusherServiceProvider) {
      PusherServiceProvider
        .setToken('Foooooooo')
        .setOptions({});
    }
  ]);

var ResponderController = function($scope, $http, Pusher) {
  $scope.responders = [];

  Pusher.subscribe('responders', 'status', function (item) {
    // an item was updated. find it in our list and update it.
    var found = false;
    for (var i = 0; i < $scope.responders.length; i++) {
      if ($scope.responders[i].id === item.id) {
        found = true;
        $scope.responders[i] = item;
        break;
      }
    }
    if (!found) {
      $scope.responders.push(item);
    }
  });

  Pusher.subscribe('responders', 'unavail', function(item) {
    $scope.responders.splice($scope.responders.indexOf(item), 1);
  });

  var retrieveResponders = function () {
    // get a list of responders from the api located at '/api/responders'
    console.log('getting responders');
    $http.get('/app/dashboard/avail-responders')
      .success(function (responders) {
        $scope.responders = responders;
      });
  };

  $scope.updateItem = function (item) {
    console.log('updating item');
    $http.post('/api/responders', item);
  };

  // load the responders
  retrieveResponders();
};
Under this setup, how would I go about monitoring connection state? I'm basically trying to replicate Firebase's "catch up" functionality for spotty connections. Firebase wasn't working out for me overall; it was too confusing trying to manage multiple data sets (and I'm not looking to replace my back-end at all).
Thanks!
It looks like the Pusher dependency only exposes subscribe and unsubscribe. See:
https://github.com/doowb/angular-pusher/blob/gh-pages/angular-pusher.js#L86
However, if you access the PusherService you get access to the Pusher instance (the one provided by the Pusher JS library) using PusherService.then. See:
https://github.com/doowb/angular-pusher/blob/gh-pages/angular-pusher.js#L91
I'm not sure why the PusherService provides a level of abstraction and why it doesn't just return the pusher instance. It's probably so that it can add some of the Angular specific functionality ($rootScope.$broadcast and $rootScope.$digest).
Maybe you can set the PusherService as a dependency and access the pusher instance using the following?
PusherService.then(function (pusher) {
  var state = pusher.connection.state;
});
To clarify @leggetter's answer, you might do something like:
app.controller("MyController", function(PusherService) {
PusherService.then(function(pusher) {
pusher.connection.bind("state_change", function(states) {
console.log("Pusher's state changed from %o to %o", states.previous, states.current);
});
});
});
Also note that pusher-js (which angular-pusher uses) has activityTimeout and pongTimeout configuration to tweak the connection state detection.
From my limited experiments, connection states can't be relied on. With the default values, you can go offline for many seconds and then back online without them being any the wiser.
Even if you lower the configuration values, someone could probably drop offline for just a millisecond and miss a message if they're unlucky.
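As a closing sketch, here is one hedged way to combine the two suggestions above: lower the pusher-js timeouts when constructing the client and re-fetch on reconnect. The timeout values are illustrative only, and per the caveat above, this still can't catch every brief dropout:
// Hypothetical setup using pusher-js directly; with angular-pusher you would
// pass these options through PusherServiceProvider.setOptions instead.
var pusher = new Pusher('app-key', {
  activityTimeout: 15000, // check the connection more often (ms)
  pongTimeout: 5000       // give the server less time to answer (ms)
});

pusher.connection.bind('state_change', function (states) {
  if (states.current === 'connected' && states.previous !== 'connected') {
    // Back online: re-fetch anything missed while disconnected,
    // e.g. the question's retrieveResponders().
    retrieveResponders();
  }
});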