How to mock out Firebase in Karma tests for Angular app

By following the AngularFire guide, I have synchronized a scope variable with a Firebase array. My code is basically the same as the tutorial (Step 5):
https://www.firebase.com/docs/web/libraries/angular/quickstart.html
Everything in my app is working, but I am really confused about how to properly mock the Firebase call in my Karma unit tests. I guess something along the lines of using $provide to mock the data? But then $add wouldn't work in my controller methods. Help?

Copied from this gist which discusses this topic at length.
The discussion is all inlined in the comments. There is a lot to consider here.
What goes wrong with mocks
Consider this data structure, used for getting a list of names based on membership
/users/<user id>/name
/rooms/members/<user id>/true
Now let's create a couple of simple classes without any real consideration for testable structure, assuming we'll use a mock of Firebase to test them. Note that I see these sorts of errors constantly in the wild; this example is not far-fetched or exaggerated (it's rather tame in comparison).
class User {
// Accepts a Firebase snapshot to create the user instance
constructor(snapshot) {
this.id = snapshot.key;
this.name = snapshot.val().name;
}
getName() { return this.name; }
// Load a user based on their id
static load(id) {
return firebase.database().ref('users').child(id)
.once('value').then(snap => new User(snap));
}
}
class MemberList {
// construct a list of members from a Firebase ref
constructor(memberListRef) {
this.users = [];
this.promises = [];
// This coupling to the Firebase SDK and the nuances of how realtime is handled will
// make our tests pretty difficult later.
this.ref = memberListRef;
this.ref.on('child_added', this._addUser, this);
}
// Get a list of the member names
// Assume we need this for UI methods that accept an array, so it can't be async
//
// It may not be obvious that we've introduced an odd coupling here, since we
// need to know that this is loaded asynchronously before we can use it.
getNames() {
return this.users.map(user => user.getName());
}
// So this kind of stuff shows up incessantly when we couple Firebase into classes and
// it has a big impact on unit testing (shown below)
ready() {
// note that we can't just use this.ref.once() here, because we have to wait for all the
// individual user objects to be loaded, adding another coupling on the User class's internal design :(
return Promise.all(this.promises);
}
// Asynchronously find the user based on the uid
_addUser(memberSnap) {
let promise = User.load(memberSnap.key).then(user => this.users.push(user));
// note that this weird coupling is needed so that we can know when the list of users is available
this.promises.push(promise);
}
destroy() {
this.ref.off('child_added', this._addUser, this);
}
}
/*****
Okay, now on to the actual unit test for list names.
****/
const mockRef = mockFirebase.database(/** some sort of mock data */).ref();
// Note how much coupling and code (i.e. bugs and timing issues we might introduce) is needed
// to make this work, even with a mock
function testGetNames() {
const memberList = new MemberList(mockRef);
// We need to extract the expected list of names from the DB, so we need to reconstruct
// the internal queries used by the MemberList and User classes (more opportunities for bugs
// and definitely not keeping our unit tests simple)
//
// One important note here is that our test unit has introduced a side effect. It has actually cached the data
// locally from Firebase (assuming our mock works like the real thing; if it doesn't we have other problems)
// and may inadvertently change the timing of async/sync events and therefore the results of the test!
mockRef.child('users').once('value').then(userListSnap => {
const userNamesExpected = [];
userListSnap.forEach(userSnap => userNamesExpected.push(userSnap.val().name));
// Okay, so now we're ready to test our method for getting a list of names
// Note how we can't just test the getNames() method, we also have to rely on .ready()
// here, breaking our effective isolation of a single point of logic.
memberList.ready().then(() => assertEqual(memberList.getNames(), userNamesExpected));
// Another really important note here: We need to call .off() on the Firebase
// listeners, or other test units will fail in weird ways, since callbacks here will continue
// to get invoked when the underlying data changes.
//
// But we'll likely introduce an unexpected bug here. If assertEqual() throws, which many testing
// libs do, then we won't reach this code! Yet another strange, intermittent failure point in our tests
// that will take forever to isolate and fix. This happens frequently; I've been a victim of this bug. :(
memberList.destroy(); // (or maybe something like mockFirebase.cleanUpConnections())
});
}
A better approach through proper encapsulation and TDD
Okay, now let's reverse this by starting with an effective test unit design and see if we can design our classes with less coupling to accommodate.
function testGetNames() {
const userList = new UserList();
userList.addUser( new User('kato', 'Kato Richardson') );
userList.addUser( new User('chuck', 'Chuck Norris') );
assertEqual( userList.getNames(), ['Kato Richardson', 'Chuck Norris']);
// Ah, this is looking good! No complexities, no async madness. No chance of bugs in my unit test!
}
/**
Note how our classes will be simpler and better designed just by applying good TDD
*/
class User {
constructor(userId, name) {
this.id = userId;
this.name = name;
}
getName() {
return this.name;
}
}
class UserList {
constructor() {
this.users = [];
}
getNames() {
return this.users.map(user => user.getName());
}
addUser(user) {
this.users.push(user);
}
// note how we don't need .destroy() and .ready() methods here just to know
// when the user list is resolved, yay!
}
// This all looks great and wonderful, and the tests are going to run wonderfully.
// But how do we populate the list from Firebase now?? The answer is an isolated
// service that handles this.
class MemberListManager {
constructor( memberListRef ) {
this.ref = memberListRef;
this.ref.on('child_added', this._addUser, this);
this.userList = new UserList();
}
getUserList() {
return this.userList;
}
_addUser(snap) {
const user = new User(snap.key, snap.val().name);
this.userList.addUser(user);
}
destroy() {
this.ref.off('child_added', this._addUser, this);
}
}
// But now we need to test MemberListManager, too, right? And wouldn't a mock help here? Possibly. Yes and no.
//
// More importantly, it's just one small service that deals with the async and external libs.
// We don't have to depend on mocking Firebase to do this either. Mocking the parts used in isolation
// is much, much simpler than trying to deal with coupled third party dependencies across classes.
//
// Additionally, it's often better to move third party calls like these
// into end-to-end tests instead of unit tests (since we are actually testing
// across third party libs and not just isolated logic)
//
// For more alternatives to a mock sdk, check out AngularFire.
// We often just used the real Firebase Database with set() or push():
// https://github.com/firebase/angularfire/blob/master/tests/unit/FirebaseObject.spec.js#L804
//
// And rely on spies to stub some fake snapshots or refs with the results we want, which is much simpler than
// trying to coax a mock or SDK to create error conditions or specific outputs:
// https://github.com/firebase/angularfire/blob/master/tests/unit/FirebaseObject.spec.js#L344
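A short illustration of that spy approach, applied to the MemberListManager above (this assumes Jasmine, which Karma commonly uses; the test name and the shape of the hand-rolled fake snapshot are mine, not from the AngularFire suite):
it('adds a member to the user list when a child_added snapshot arrives', function () {
  // A spy object stands in for the Firebase ref; we only need on()/off() to exist
  var fakeRef = jasmine.createSpyObj('ref', ['on', 'off']);
  var manager = new MemberListManager(fakeRef);
  // A fake snapshot exposing only the two members the code under test reads
  var fakeSnap = {
    key: 'kato',
    val: function () { return { name: 'Kato Richardson' }; }
  };
  manager._addUser(fakeSnap);
  expect(manager.getUserList().getNames()).toEqual(['Kato Richardson']);
});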

Related

share() vs ReplaySubject: Which one, and neither works

I'm trying to implement short-term caching in my Angular service -- a bunch of sub-components get created in rapid succession, and each one has an HTTP call. I want to cache them while the page is loading, but not forever.
I've tried the following two methods, neither of which have worked. In both cases, the HTTP URL is hit once for each instance of the component that is created; I want to avoid that -- ideally, the URL would be hit once when the grid is created, then the cache expires and the next time I need to create the component it hits the URL all over again. I pulled both techniques from other threads on StackOverflow.
share() (in service)
getData(id: number): Observable<MyClass[]> {
return this._http.get(this.URL)
.map((response: Response) => <MyClass[]>response.json())
.share();
}
ReplaySubject (in service)
private replaySubject = new ReplaySubject(1, 10000);
getData(id: number): Observable<MyClass[]> {
if (this.replaySubject.observers.length) {
return this.replaySubject;
} else {
return this._http.get(this.URL)
.map((response: Response) => {
let data = <MyClass[]>response.json();
this.replaySubject.next(data);
return data;
});
}
}
Caller (in component)
ngOnInit() {
this.myService.getData(this.id)
.subscribe((resultData: MyClass[]) => {
this.data = resultData;
},
(error: any) => {
alert(error);
});
}
There's really no need to hit the URL each time the component is created -- they return the same data, and in a grid of rows that contain the component, the data will be the same. I could call it once when the grid itself is created, and pass that data into the component. But I want to avoid that, for two reasons: first, the component should be relatively self-sufficient. If I use the component elsewhere, I don't want the parent component to have to cache data there, too. Second, I want to find a short-term caching pattern that can be applied elsewhere in the application. I'm not the only person working on this, and I want to keep the code clean.
Most importantly, if you want something to persist even while Angular components are created and destroyed, it can't live in a component; it has to live in a service that is shared among your components.
Regarding RxJS, you usually don't have to use ReplaySubject directly; just use publishReplay(1, 10000).refCount() instead.
The share() operator is just shorthand for publish().refCount(), which uses a plain Subject internally and therefore doesn't replay cached values.
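For completeness, a minimal sketch of what that looks like in the service from the question (same Http/RxJS 5 patched-operator setup; the cachedData$ field name is just illustrative, and the 10000 ms window means the buffered value is only replayed for ten seconds):
private cachedData$: Observable<MyClass[]>;

getData(id: number): Observable<MyClass[]> {
  if (!this.cachedData$) {
    // Remember to import the publishReplay operator the same way map/share are imported
    this.cachedData$ = this._http.get(this.URL)
      .map((response: Response) => <MyClass[]>response.json())
      .publishReplay(1, 10000)
      .refCount();
  }
  return this.cachedData$;
}
All components that call getData() while the page is loading then share one HTTP request instead of issuing one each.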

How can I persist redux state tree on refresh?

The first principle of Redux documentation is:
The state of your whole application is stored in an object tree within a single store.
And I actually thought that I understand all of the principles well.
But now I'm confused about what application means.
If application means just one somewhat complicated part of a website that lives on a single page, I understand. But what if application means the whole website? Should I use localStorage, a cookie, or something else to keep the state tree? And what if the browser doesn't support localStorage?
I want to know how developers keep their state tree! :)
If you would like to persist your redux state across a browser refresh, it's best to do this using redux middleware. Check out the redux-persist and redux-storage middleware. They both try to accomplish the same task of storing your redux state so that it may be saved and loaded at will.
--
Edit
It's been some time since I've revisited this question, but seeing that the other (albeit more upvoted) answer encourages rolling your own solution, I figured I'd answer it again.
As of this edit, both libraries have been updated within the last six months. My team has been using redux-persist in production for a few years now and have had no issues.
While it might seem like a simple problem, you'll quickly find that rolling your own solution will not only cause a maintenance burden, but result in bugs and performance issues. The first examples that come to mind are:
JSON.stringify and JSON.parse can not only hurt performance when called more often than needed, they can also throw errors, and an unhandled error in a critical piece of code like your redux store can crash your application.
(Partially mentioned in the answer below): Figuring out when and how to save and restore your app state is not a simple problem. Save too often and you'll hurt performance. Save too rarely, or persist the wrong parts of state, and you may find yourself with more bugs. The libraries mentioned above are battle-tested in their approach and provide some pretty fool-proof ways of customizing their behavior.
Part of the beauty of redux (especially in the React ecosystem) is its ability to be placed in multiple environments. As of this edit, redux-persist has 15 different storage implementations, including the awesome localForage library for web, as well as support for React Native, Electron, and Node.
To sum it up, for 3kB minified + gzipped (at the time of this edit) this is not a problem I would ask my team to solve itself.
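For reference, wiring redux-persist up takes only a few lines. This is a minimal sketch against the v5+ API (the ./reducers import path is assumed):
import { createStore } from 'redux';
import { persistStore, persistReducer } from 'redux-persist';
import storage from 'redux-persist/lib/storage'; // defaults to localStorage on the web
import rootReducer from './reducers';

const persistConfig = { key: 'root', storage };
const persistedReducer = persistReducer(persistConfig, rootReducer);

export const store = createStore(persistedReducer);
// The persistor controls rehydration (e.g. via PersistGate in a React app)
export const persistor = persistStore(store);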
Edit 25-Aug-2019
As stated in one of the comments, the original redux-storage package has been moved to react-stack. That approach still focuses on implementing your own state management solution.
Original Answer
While the provided answer was valid at some point, it is important to note that the original redux-storage package has been deprecated and is no longer being maintained...
The original author of the redux-storage package decided to deprecate the project, and it is no longer maintained.
Now, if you don't want to depend on other packages in order to avoid problems like these in the future, it is very easy to roll your own solution.
All you need to do is:
1- Create a function that returns the state from localStorage, then pass that state as the second argument to redux's createStore function in order to hydrate the store
const store = createStore(appReducers, state);
2- Listen for state changes, and every time the state changes, save the state to localStorage
store.subscribe(() => {
//this is just a function that saves state to localStorage
saveState(store.getState());
});
And that's it...I actually use something similar in production, but instead of using functions, I wrote a very simple class as below...
class StateLoader {
loadState() {
try {
let serializedState = localStorage.getItem("http://contoso.com:state");
if (serializedState === null) {
return this.initializeState();
}
return JSON.parse(serializedState);
}
catch (err) {
return this.initializeState();
}
}
saveState(state) {
try {
let serializedState = JSON.stringify(state);
localStorage.setItem("http://contoso.com:state", serializedState);
}
catch (err) {
}
}
initializeState() {
return {
//state object
}
}
}
and then when bootstrapping your app...
import StateLoader from "./state.loader"
const stateLoader = new StateLoader();
let store = createStore(appReducers, stateLoader.loadState());
store.subscribe(() => {
stateLoader.saveState(store.getState());
});
Hope it helps somebody
Performance Note
If state changes are very frequent in your application, saving to local storage too often might hurt your application's performance, especially if the state object graph to serialize/deserialize is large. For these cases, you might want to debounce or throttle the function that saves state to localStorage using RxJs, lodash or something similar.
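For example, a minimal sketch of throttling the subscription from the snippets above with lodash (the one-second interval is arbitrary):
import throttle from 'lodash/throttle';

// Write to localStorage at most once per second, no matter how often the state changes
store.subscribe(throttle(() => {
  saveState(store.getState());
}, 1000));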
This is based on Leo's answer (which should be the accepted answer since it achieves the question's purpose without using any 3rd party libs).
I've created a Singleton class that creates a Redux Store, persists it using local storage and allows simple access to its store through a getter.
To use it, just put the following Redux-Provider element around your main class:
// ... Your other imports
import PersistedStore from "./PersistedStore";
ReactDOM.render(
<Provider store={PersistedStore.getDefaultStore().store}>
<MainClass />
</Provider>,
document.getElementById('root')
);
and add the following class to your project:
import {
createStore
} from "redux";
import rootReducer from './RootReducer'
const LOCAL_STORAGE_NAME = "localData";
class PersistedStore {
// Singleton property
static DefaultStore = null;
// Accessor to the default instance of this class
static getDefaultStore() {
if (PersistedStore.DefaultStore === null) {
PersistedStore.DefaultStore = new PersistedStore();
}
return PersistedStore.DefaultStore;
}
// Redux store
_store = null;
// When class instance is used, initialize the store
constructor() {
this.initStore()
}
// Initialization of Redux Store
initStore() {
this._store = createStore(rootReducer, PersistedStore.loadState());
this._store.subscribe(() => {
PersistedStore.saveState(this._store.getState());
});
}
// Getter to access the Redux store
get store() {
return this._store;
}
// Loading persisted state from localStorage, no need to access
// this method from the outside
static loadState() {
try {
let serializedState = localStorage.getItem(LOCAL_STORAGE_NAME);
if (serializedState === null) {
return PersistedStore.initialState();
}
return JSON.parse(serializedState);
} catch (err) {
return PersistedStore.initialState();
}
}
// Saving persisted state to localStorage every time something
// changes in the Redux Store (This happens because of the subscribe()
// in the initStore-method). No need to access this method from the outside
static saveState(state) {
try {
let serializedState = JSON.stringify(state);
localStorage.setItem(LOCAL_STORAGE_NAME, serializedState);
} catch (err) {}
}
// Return whatever you want your initial state to be
static initialState() {
return {};
}
}
export default PersistedStore;

Angular Service and Web Workers

I have an Angular 1 app, and I am trying to increase the performance of a particular service that makes a lot of calculations (it probably is not optimized, but that's beside the point for now; running it in another thread is the current goal, to improve animation performance).
The App
The app runs calculations on your GPA, terms, courses, assignments, etc. The service name is calc. Inside calc there are user, term, course and assign namespaces. Each namespace is an object of the following form:
{
//Times for the calculations (for development only)
times:{
//an array of calculation times for logging and average calculation
array: [],
//Print out the min, max average and total calculation times
report: function(){...}
},
//Hashes the object (with service.hash()) and checks to see if we have cached calculations for the item, if not calls runAllCalculations()
refresh: function(item){...},
//Runs calculations, saves it in the cache (service.calculations array) and returns the calculation object
runAllCalculations: function(item){...}
}
Here is a screenshot from the very nice structure tab of IntelliJ to help visualization
What Needs To Be Done?
Detect Web Worker Compatibility (MDN)
Build the service depending on Web Worker compatibility
a. Structure it the exact same as it is now
b. Replace with a Web Worker "proxy" (Correct terminology?) service
The Problem
The problem is how to create the Web Worker "Proxy" to maintain the same service behavior from the rest of the code.
Requirements/Wants
A few things that I would like:
Most importantly, as stated above, keep the service behavior unchanged
To keep one code base for the service, keep it DRY, not having to modify two spots. I have looked at WebWorkify for this, but I am unsure how to implement it best.
Use Promises while waiting for the worker to finish
Use Angular and possibly other services inside the worker (if it's possible); again, WebWorkify seems to address this
The Question
...I guess there hasn't really been a question thus far, it's just been an explanation of the problem...So without further ado...
What is the best way to use an Angular service factory to detect Web Worker compatibility, conditionally implement the service as a Web Worker, while keeping the same service behavior, keeping DRY code and maintaining support for non Web Worker compatible browsers?
Other Notes
I have also looked at VKThread, which may be able to help with my situation, but I am unsure how to implement it the best.
Some more resources:
How to use a Web Worker in AngularJS?
http://www.html5rocks.com/en/tutorials/workers/basics/
https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers#Worker_feature_detection
In general, a good way to write manageable code that works in a worker - and especially code that can also run in the same window (e.g. when workers are not supported) - is to make the code event-driven and then use a simple proxy to drive the events through the communication channel, in this case the worker.
I first created an abstract "class" that doesn't itself define a way of sending events to the other side.
function EventProxy() {
// Object that will receive events that come from the other side
this.eventSink = null;
// This is just a trick I learned to simulate real OOP for methods that
// are used as callbacks
// It also gives you a reference you can use to remove the callback later
this.eventFromObject = this.eventFromObject.bind(this);
}
// The object gets this method as the callback for all events
// typically, you will extract event parameters from the "arguments" variable
EventProxy.prototype.eventFromObject = function(name) {
// This is not implemented here. A subclass such as WorkerProxy must override it.
throw new Error("This is an abstract method. The object dispatched an event "+
"but this class doesn't do anything with events.");
}
EventProxy.prototype.setObject = function(object) {
// If object is already set, remove event listener from old object
if(this.eventSink!=null)
//do it depending on your framework
... something ...
this.eventSink = object;
// Listen on all events. Obviously, your event framework must support this
object.addListener("*", this.eventFromObject);
}
// Child classes will call this when they receive
// events from other side (eg. worker)
EventProxy.prototype.eventReceived = function(name, args) {
// put event name as first parameter
args.unshift(name);
// Run the event on the object
this.eventSink.dispatchEvent.apply(this.eventSink, args);
}
Then you implement this for worker for example:
function WorkerProxy(worker) {
// call superconstructor
EventProxy.call(this);
// worker
this.worker = worker;
worker.addEventListener("message", this.eventFromWorker = this.eventFromWorker.bind(this));
}
WorkerProxy.prototype = Object.create(EventProxy.prototype);
// The object gets this method as the callback for all events
// typically, you will extract event parameters from the "arguments" variable
WorkerProxy.prototype.eventFromObject = function(name) {
// include event args but skip the first one, the name
var args = [];
args.push.apply(args, arguments);
args.splice(0, 1);
// Send the event to the script in worker
// You could use additional parameter to use different proxies for different objects
this.worker.postMessage({type: "proxyEvent", name:name, arguments:args});
}
WorkerProxy.prototype.eventFromWorker = function(event) {
if(event.data.type=="proxyEvent") {
// Use superclass method to handle the event
this.eventReceived(event.data.name, event.data.arguments);
}
}
The usage then would be that you have some service and some interface and in the page code you do:
// Or other proxy type, eg socket.IO, same window, shared worker...
var proxy = new WorkerProxy(new Worker("runServiceInWorker.js"));
//eg user clicks something to start calculation
var interface = new ProgramInterface();
// join them
proxy.setObject(interface);
And in the runServiceInWorker.js you do almost the same:
importScripts("myservice.js", "eventproxy.js");
// Here we're of course really lucky that the web worker API is symmetric
var proxy = new WorkerProxy(self);
// 1. make a service
// 2. assign to proxy
proxy.setObject(new MyService());
// 3. profit ...
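For the feature-detection part of the question, the wiring could look something like the sketch below. SameWindowProxy is a hypothetical EventProxy subclass that simply forwards events to a service living in the same window, for browsers without Worker support:
var proxy;
if (typeof Worker !== 'undefined') {
  // Workers are supported: run the service in a background thread
  proxy = new WorkerProxy(new Worker('runServiceInWorker.js'));
} else {
  // No Worker support: hypothetical fallback that drives the same events in-window
  proxy = new SameWindowProxy(new MyService());
}
proxy.setObject(new ProgramInterface());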
In my experience, I sometimes eventually had to detect which side I was on, but that was with web sockets, which are not symmetric (there's a server and many clients). You could run into similar problems with a shared worker.
You mentioned Promises - I think the approach with promises would be similar, though maybe more complicated as you would need to store the callbacks somewhere and index them by ID of the request. But surely doable, and if you're invoking worker functions from different sources, maybe better.
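As a rough sketch of that promise bookkeeping (all names here are illustrative, and it assumes the worker answers each request with a message of the form {type: 'response', id, result} or {type: 'response', id, error}):
function makeWorkerCaller(worker) {
  var pending = {};
  var nextId = 0;
  worker.addEventListener('message', function (event) {
    var msg = event.data;
    if (msg.type === 'response' && pending[msg.id]) {
      // Settle the promise that was stored when the request was sent
      if (msg.error) pending[msg.id].reject(msg.error);
      else pending[msg.id].resolve(msg.result);
      delete pending[msg.id];
    }
  });
  return function call(method, args) {
    return new Promise(function (resolve, reject) {
      var id = nextId++;
      pending[id] = { resolve: resolve, reject: reject };
      worker.postMessage({ type: 'request', id: id, method: method, args: args });
    });
  };
}
// e.g. var callCalc = makeWorkerCaller(new Worker('runServiceInWorker.js'));
//      callCalc('runAllCalculations', [item]).then(function (result) { ... });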
I am the author of the vkThread plugin which was mentioned in the question. And yes, I developed Angular version of vkThread plugin which allows you to execute a function in a separate thread.
The function can be defined directly in the main thread or in an external JavaScript file.
It can be:
Regular functions
Object's methods
Functions with dependencies
Functions with context
Anonymous functions
Basic usage:
/* function to execute in a thread */
function foo(n, m){
return n + m;
}
// to execute this function in a thread: //
/* create an object, which you pass to vkThread as an argument*/
var param = {
fn: foo, // <-- function to execute
args: [1, 2] // <-- arguments for this function
};
/* run thread */
vkThread.exec(param).then(
function (data) {
console.log(data); // <-- thread returns 3
}
);
Examples and API doc: http://www.eslinstructor.net/ng-vkthread/demo/
Hope this helps,
--Vadim

Non-essential Angular promises

I'm trying to rewrite the code for http://m.amsterdamfoodie.nl in a more modern style. Basically, a single-page Angular app downloads a set of restaurants with locations and places them on a map. If the user is in the Amsterdam area, then the user's location is added too, as are the distances to the places.
At present I manage the asynchronous returns using a lot of "if (relevant object from other async call exists) then do next step". I'd like to make more use of promises instead.
So, flow control should be:
Start ajax data download, and geolocation call
if geolocation returns first, store coords for later
once ajax data is downloaded
if geolocation available
calculate distances to each restaurant, and pass control to rendering code
else pass control immediately to render code
if geolocation resolves later, calculate distances and re-render
The patterns I find on the internet assume that all async calls must return successfully before continuing, whereas my geolocation call can fail (or return a location far from amsterdam) and that's OK. Is there a trick I could use in this scenario or are the conditional statements really the way to go?
Every time you use .then, you essentially create a new promise based on the previous promise and its state. You can use that to your advantage (and you should).
You can do something along the lines of:
function getGeolocation() {
return $http.get('/someurl').then(
function resolveHandler(response) {
// $http.X resolves with a HTTP response object.
// The JSON data is on its `data` attribute
var data = response.data;
// Check if the data is valid (with some other function).
// By this, I mean e.g. checking if it is "far from amsterdam",
// as you have described that as a possible error case
if(isValid(data)) {
return data;
}
else {
return null;
}
},
function rejectHandler() {
// Handle the retrieval failure by explicitly returning a value
// from the rejection handler. Null is arbitrarily chosen here because it
// is a falsy value. See the last code snippet for the use of this
return null;
}
);
}
function getData() {
return $http.get('/dataurl').then(...);
}
and then use $q.all on both promises, which in turn creates a new promise that resolves as soon as all the given promises have resolved.
Note: In Kris Kowal's Q, which Angular's $q service is based on, you could use the allSettled method, which does almost the same as all, but resolves when all promises are settled (fulfilled or rejected), and not only if all promises are fulfilled. Angular's $q does not provide this method, so you can instead work your way around this by explicitly making the failed http request resolve anyways.
So, then you can do something like:
$q.all([getData(), getGeolocation()])
.then(function(results) {
// `$q.all` resolves with an array of values:
// `results[0]` is the value that getData() resolved with,
// `results[1]` is the value that getGeolocation() resolved with.
var data = results[0];
var geolocation = results[1];
// Check the documentation on `$q.all` for this.
if(geolocation) {
// Yay, the geolocation data is available and valid, do something
}
// Handle the rest of the data
});
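If you prefer not to bake that workaround into each promise-producing function, a small allSettled-style helper on top of $q does the same thing generically (the helper name and the result shape are my own, not part of Angular's API; $q is assumed to be injected into the surrounding service):
function allSettled(promises) {
  return $q.all(promises.map(function (promise) {
    return promise.then(
      function (value) { return { fulfilled: true, value: value }; },
      function (reason) { return { fulfilled: false, reason: reason }; }
    );
  }));
}
// allSettled([getData(), getGeolocation()]) then resolves even if geolocation fails,
// and you can check results[1].fulfilled before using results[1].value.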
Maybe I'm missing something... but since you have no dependencies between the two async calls, I don't see why you can't just follow the logic you outlined:
var geoCoordinates = null;
var restaurants = null;
var distances = null;
getRestaurantData()
.then(function(data){
restaurants = data;
if (geoCoordinates) {
distances = calculate(restaurants, geoCoordinates);
}
// set $scope variables as needed
});
getGeoLocation()
.then(function(data){
geoCoordinates = data;
if (restaurants){
distances = calculate(restaurants, geoCoordinates)
}
// set $scope variables as needed
});

Configure constant after config phase

I have an Angular constant which I need to configure after the config phase of the app.
It's basically a collection of endpoints, some of which are finalized after the app makes a few checks.
What I am currently doing is returning some of them as functions where I can pass in the part that changes: (Example code, not production code.)
angular.module('...',[]).constant('URL',(function()
{
var apiRoot='.../api/'
return {
books:apiRoot+'books',//No env needed here - Property (good)
cars:function(env){//But needed here - Method (bad, inconsistent)
return env+apiRoot+'cars';
}
};
}()));
But that's rather inefficient because I only need to compile that URL once, not each time I need it.
URL.books
URL.cars('dev');
I was thinking of turning it into an Angular provider and configure it prior to instantiation, but I don't know if it's possible to configure it outside the config block, when it would be too early because I don't have env yet for example.
How can I do it?
You could use a promise for each entry (books,cars,...). When you have all info (e.g. env) for the cars entry, you can resolve the promise.
angular.module('...',[]).constant('URL',(function()
{
var apiRoot='.../api/'
var carsPromise = function() {
var deferred = $q.defer();
getEnv().then(function(env) { // getEnv also returns a promise
deferred.resolve(env+apiRoot+'/cars');
});
return deferred.promise;
}
return {
books: instantlyResolvedPromise(apiRoot+'books'),
cars: carsPromise()
};
}()));
Of course, this adds some complexity as promises tend to spread like a disease but you will end up with a consistent api.
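For illustration, consumers would then read every entry the same way through .then(), whether or not the underlying URL needed extra configuration (this assumes the entries above resolve to plain URL strings and that $http is injected as usual):
URL.books.then(function (url) {
  return $http.get(url);
});

URL.cars.then(function (url) {
  return $http.get(url);
});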

Resources