Exclude items from the first-level filter when child-level includes are empty in Loopback find() - angularjs

When using Strongloop Loopback, we can make a data request (with relations) to the database in the following ways:
(1) Using lb-service (at front-end)
Model.find({
    filter: {
        where: {id: 1},
        include: {
            relation: 'relationship',
            scope: {where: {id: 2}}
        }
    }
}, function (instances) {
}, function (err) {
});
(2) Using node.js (at server-side)
Model.find({
    where: {id: 1},
    include: {
        relation: 'relationship',
        scope: {where: {id: 2}}
    }
}, function (err, instances) {
});
What I need: exclude items from the first filter when a nested (include) filter fails.
There is one obvious solution: filtering the response, this way:
instances = instances.filter(function (instance) {
    return typeof(instance.relationship) !== "undefined";
});
But... using filter() to eliminate items is not a scalable solution, because it always iterates over the whole array. Doing this at the front-end is not good, because the size of the array will slow down performance. Bringing it to the server side could be a solution. But... each model will have its own particular set of relations, so it is not scalable again!
Main question: Is there some way to overcome this situation, excluding items from the first filter when the second (third, or more) filter fails, simultaneously (or not)?
Something like, defining it on filter object:
var filter = {
    where: {id: 1},
    include: {
        relation: {name: 'relationship', required: true}, // required means this filter *needs* to be satisfied
        scope: {where: {id: 2}}
    }
};
Requirements:
(1) SQL query is not an option ;)
(2) I am using MySQL as the database, so things like
{ where: { id: 1, 'relationship.id': 2 } }
will not work as desired.

I don't know of a way to do this within the filter syntax itself. I think you would have to write a custom remote method to do the filtering yourself after the initial query was complete. Here's what that might look like:
// in /common/models/model.js
Model.filterResults = function filterResults(filter, next) {
    Model.find(filter, function doFilter(err, data) {
        if (err) { return next(err); }
        var filteredData = data.filter(function (model) {
            return model.otherThings && model.otherThings().length;
        });
        next(null, filteredData);
    });
};

Model.remoteMethod(
    'filterResults',
    {
        accepts: { arg: 'filter', type: 'object', http: { source: 'query' } },
        returns: { arg: 'results', type: 'array' },
        http: { verb: 'get', path: '/no-empties' }
    }
);
Now you can hit .../api/Models/no-empties?filter={"include":"otherThings"} and you will only get back Models that have a related OtherThing. Note that this is for a one-to-many relationship, but hopefully you can see how to change it to fit your needs.
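If you need the same idea across several relations, one option is to derive the relation names from the incoming filter rather than hard-coding otherThings. A rough, untested sketch, assuming filter.include arrives in one of the usual LoopBack shapes (a string, an object with a relation key, or an array of those) and that an included relation's accessor returns its data synchronously, as otherThings() does above:
// Hypothetical generalization: keep only instances whose included
// relations actually resolved to data.
Model.filterResults = function filterResults(filter, next) {
    filter = filter || {};
    // Normalize include into an array of relation names (not exhaustive)
    var includes = [].concat(filter.include || []).map(function (inc) {
        return typeof inc === 'string' ? inc : inc.relation;
    });
    Model.find(filter, function (err, instances) {
        if (err) { return next(err); }
        var filtered = instances.filter(function (instance) {
            // keep the instance only if every included relation resolved to something
            return includes.every(function (name) {
                var related = instance[name] && instance[name]();
                return Array.isArray(related) ? related.length > 0 : !!related;
            });
        });
        next(null, filtered);
    });
};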

Related

Kendo DataSource reading from Async/await method which uses Axios to fetch data

Using React with TypeScript
Please could somebody provide an example of how I might be able to use a Kendo DataSource to read from a method which internally uses Axios to prod an external API for JSON data..? I must have flown through 20 different versions of this code trying different approaches, nothing seems to fit...
All I'm trying to do currently is supply a Kendo ComboBox with an array of {id: number, name: string}
Very basic stuff at the moment, but I do have to use a similar approach to this later on with a Kendo Grid which handles server side sorting and pagination so I'd like to get this working now then that should be somewhat easier later on...
The reason I want to use Axios is because I've written an api.ts file that appends appropriate headers on the gets and posts etc and also handles the errors nicely (i.e. when the auth is declined etc...)
A basic example of what I'm trying, which isn't working, is this:
public dataSource: any;

constructor(props: {}) {
    super(props);
    this.dataSource = new kendo.data.DataSource({
        type: "odata",
        transport: {
            read: function() {
                return [{ id: 1, name: "Blah" }, { id: 2, name: "Thing" }];
            }.bind(this)
        },
        schema: {
            model: {
                fields: {
                    id: { type: "number" },
                    name: { type: "string" }
                }
            }
        }
    });
}

<ComboBox
    name="test"
    dataSource={this.dataSource}
    placeholder={this.placeholder}
    dataValueField="id"
    dataTextField="name"
/>
Anybody got any thoughts on this please? :)
Easy fix in the end...
this.dataSource = new kendo.data.DataSource({
    transport: {
        read: function(options: any) {
            options.success([{ id: 1, name: "Blah" }, { id: 2, name: "Thing" }]);
        }.bind(this)
    },
    schema: {
        model: {
            fields: {
                id: { type: "number" },
                name: { type: "string" }
            }
        }
    }
});
Two things were wrong:
Removed the type: "odata" setting, and
added the usage of options.success inside the read function.
All working fine now with the async/await function as well, just passing the data into options.success in the .then on the promise. Job done :-)
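For reference, here is a minimal sketch of what that async read can look like; api.get stands in for the Axios wrapper from api.ts mentioned above, and the URL and response shape are assumptions:
this.dataSource = new kendo.data.DataSource({
    transport: {
        read: (options: any) => {
            // api.get is the hypothetical Axios-based helper from api.ts
            api.get("/things")
                .then((response: any) => options.success(response.data))
                .catch((error: any) => options.error(error));
        }
    },
    schema: {
        model: {
            fields: {
                id: { type: "number" },
                name: { type: "string" }
            }
        }
    }
});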

How can I get an item in the redux store by a key?

Suppose I have a reducer defined which returns an array of objects that contain keys like an id or something. What is the redux way of getting/finding a certain object with a certain id in the array? The array itself can contain several arrays:
{ items: [{id: 1, ...}], cases: {...} }
What is the redux way to go to find a record/ node by id?
The ideal redux way to store such data is to keep it as byId and allIds in an object in the reducer.
In your case it would be:
{
    items: {
        byId: {
            item1: {
                id: 'item1',
                details: {}
            },
            item2: {
                id: 'item2',
                details: {}
            }
        },
        allIds: ['item1', 'item2']
    },
    cases: {
        byId: {
            case1: {
                id: 'case1',
                details: {}
            },
            case2: {
                id: 'case2',
                details: {}
            }
        },
        allIds: ['case1', 'case2']
    }
}
Ref: http://redux.js.org/docs/recipes/reducers/NormalizingStateShape.html
This helps keep the state normalized, which makes the data easier both to maintain and to use.
This way it is easy to iterate over all the items and render them, and if we need to get any object by its id, it is an O(1) lookup instead of iterating over the complete array every time.
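For illustration, selectors against that shape could look like this (the names are just examples):
// O(1) lookup by id
const getItemById = (state, id) => state.items.byId[id];

// ordered iteration for rendering
const getAllItems = (state) =>
    state.items.allIds.map((id) => state.items.byId[id]);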
I'd use a library like lodash:
var fred = _.find(users, function(user) { return user.id === 1001; });
It might be worth noting that it is seen as good practice to 'prefer objects over arrays' in the store (especially for large state trees); in this case you'd store your items in an object with (say) id as the key:
{
    '1000': { name: 'apple', price: 10 },
    '1001': { name: 'banana', price: 40 },
    '1002': { name: 'pear', price: 50 },
}
This makes selection easier; however, you have to arrange the shape of the state when loading.
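Arranging that shape at load time is usually a small transform over the fetched array; a sketch, assuming the API returns an array of objects that each carry an id:
// turn [{ id: '1000', name: 'apple', price: 10 }, ...] into the keyed object above
const byId = itemsArray.reduce((acc, item) => {
    acc[item.id] = item;
    return acc;
}, {});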
There is no special way of doing this with redux; it is a plain JS task. I suppose you use react as well:
function mapStoreToProps(store) {
    function findMyInterestingThingy(result, key) {
        // assign anything you want to result
        return result;
    }
    return {
        myInterestingThingy: Object.keys(store).reduce(findMyInterestingThingy, {})
        // you don't really need to use reduce; you can have any logic you want
    };
}

export default connect(mapStoreToProps)(MyComponent)

How can I get viewModel object in store filter function?

I defined the store and a filter. The ViewModel contains a test object, and I need to filter the store items by this object.
Ext.define('XXX.view.XXX.ViewXXXXModel', {
    extend: 'Ext.app.ViewModel',
    ...
    stores: {
        agreements: {
            source: 'XXX',
            filters: {
                filterFn: function(item) {
                    return item.some_field !== this.get('test').somevalue;
                }
            }
        }
    }
I cannot access the test object of the ViewModel from the filter function. How can I do that?
Way too late now, but I just had the same issue; a cleaner method is to return the filterFn as a formula bind:
For your original example:
stores: {
    agreements: {
        source: 'XXX',
        filters: [{
            filterFn: '{storeFilter}'
        }]
    }
},
formulas: {
    storeFilter: function(get) {
        var somevalue = get('test').somevalue;
        return function(item) {
            // use the captured value; `this` is not the ViewModel here
            return item.some_field !== somevalue;
        };
    }
}
Edit:
When I originally wrote this I wasn't aware that Ext continually adds extra filters when using setFilters rather than just replacing them all. To get around this, you need to name the filter using an id. In the above example, something like this:
filters: [{
    id: 'myVMFilterFunction',
    filterFn: '{storeFilter}'
}]
Then it replaces the filter as expected
Ideally you would use the declarative filter format in most cases - the granularity ensures that bindings are more specific, triggering appropriate / expected updates when data changes. For example:
stores: {
    agreements: {
        source: 'XXX',
        filters: {
            property: 'some_field',
            value: '{test.somevalue}',
            operator: '!='
        }
    }
}
If you really want to use imperative code you can inject the view-model scope via a formula:
formulas: {
    _this: function() {
        return this;
    }
}
Then bind it like so:
stores: {
    agreements: {
        source: 'XXX',
        filters: {
            scope: '{_this}',
            filterFn: function(item) {
                return item.some_field !== this.get('test.somevalue');
            }
        }
    }
}
This is a bit of a kludge though and changes to test likely won't be reflected in the store and any visual component tied to it. In this case you'd end up having to manually reload the store or reapply the filters - which kind of defeats the point of MVVM.

Meteor, MongoDB get part of array through subscription

I have a question about how to just get a certain element of an array using MongoDB and MeteorJS. I have the following schema for the user document:
bankList: [
    {
        id: "34567890987654345678",
        name: "xfgchjbkn",
        type: "credit"
    },
    {
        id: "09876543456789098767",
        name: "65789876t8",
        type: "debit"
    }
]
I first subscribe to only part of the fields in the array; specifically, I gather a list of all the ids. Then I have an edit screen that should subscribe to all of the fields for a specific element in the array with a matching id. I do not want to expose the rest of the array, just the single element. Currently, I use the following to first gather a list of just the ids:
Meteor.users.find({_id: this.userId},
{fields:{'bankList.id': 1}});
And the following publication-subscription method to get just a specific element's information:
Publication:
Meteor.publish("userBankAdvanced", function(bankId){
check(bankId,String);
if(this.userId){
return Meteor.users.find({_id:this.userId,"bankList.id": bankId}, {'bankList.$': 1});
}else{
this.ready();
}
});
Subscription:
this.route('edit_account', {
    path: '/edit/account/',
    waitOn: function() {
        if (Session.get("bankId")) {
            return Meteor.subscribe('userBankAdvanced', Session.get("bankId"));
        }
        return null;
    },
    data: function() {
        if (Session.get("bankId")) {
            return Meteor.users.findOne();
        }
        return null;
    },
    onBeforeAction: function() {
        beforeHooks.isRevise(Session.get("bankId"));
    }
});
The subscription method returns all of the elements of the array with all of the information.
I want, for example, just this (not the entire list with all of the information):
bankList: [
    {
        id: "34567890987654345678",
        name: "xfgchjbkn",
        type: "credit"
    }
]
It looks like you're just missing the "fields" specifier in your "userBankAdvanced" publish function. I wrote a test in meteorpad using your example and it seems to work fine. The bank id is hardcoded for simplicity there.
So instead of
return Meteor.users.find({_id:this.userId,"bankList.id": bankId}, {'bankList.$': 1});
try using
return Meteor.users.find({_id:this.userId,"bankList.id": bankId}, {fields: {'bankList.$': 1}});
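Combined with the publish function from the question, that suggestion reads:
Meteor.publish("userBankAdvanced", function (bankId) {
    check(bankId, String);
    if (this.userId) {
        // same query as before, only the options argument gains a fields projection
        return Meteor.users.find(
            {_id: this.userId, "bankList.id": bankId},
            {fields: {'bankList.$': 1}}
        );
    }
    this.ready();
});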
No luck; in Meteor the "fields" option works only one level deep. In other words, there's no built-in way to include/exclude subdocument fields.
But not all is lost. You can always do it manually:
Meteor.publish("userBankAdvanced", function (bankId) {
var self = this;
var handle = Meteor.users.find({
_id: self.userId, "bankList.id": bankId
}).observeChanges({
added: function (id, fields) {
self.added("users", id, filter(fields, bankId));
},
changed: function (id, fields) {
self.changed("users", id, filter(fields, bankId));
},
removed: function (id) {
self.removed("users", id);
},
});
self.ready();
self.onStop(function () {
handle.stop();
});
});
function filter(fields, bankId) {
    if (_.has(fields, 'bankList')) {
        fields.bankList = _.filter(fields.bankList, function (bank) {
            return bank.id === bankId;
        });
    }
    return fields;
}
EDIT: I updated the above code to match the question requirements. It turns out, though, that Carlos's answer is correct as well, and it's of course much simpler, so I recommend using that one.

Handling Subsidiary Views in Backbone.js

I have a basic Backbone application which obtains an array of JSON objects from a remote service and displays them: all good so far. However, each JSON object has an array of tags, and I want to display the tags in a separate area of the webpage.
My question is: what is the most Backbone-friendly way of doing this? I could parse the existing data again in a second view, which is cleaner but takes up more computation (processing the entire array twice).
An alternative is gathering up the tag information in the primary view as it is working through the array and then passing it along to the subsidiary view, but then I'm linking the views together.
Finally, I'd like to filter based on those tags (so the tags will become toggle buttons and turning those buttons on/off will filter the information in the primary view); does this make any difference to how this should be laid out?
Bonus points for code snippets.
Hm. I'm not sure if this is the Backbone-friendly way, but I'll put the logic to retrieve a list of tags (I think that's what you meant by "parse") in the collection.
Both the main view and the subview will "listen" to the same collection, and the subview will just call collection.getTags() to get a list of tags it needs.
// Model that represents the list data
var ListDataModel = Backbone.Model.extend({
    defaults: function() {
        return {
            name: null,
            tags: []
        };
    }
});

// Collection of list data
var ListDataCollection = Backbone.Collection.extend({
    model: ListDataModel,
    initialize: function() {
        // Start with an empty tag cache and expire it on reset/change
        this._cachedTags = null;
        this.on('reset', this.expireTagCache, this);
        this.on('change', this.expireTagCache, this);
    },
    /**
     * Expires tag cache
     * @private
     */
    expireTagCache: function() {
        this._cachedTags = null;
    },
    /**
     * Retrieves an array of tags in collection
     *
     * @return {Array}
     */
    getTags: function() {
        if (this._cachedTags === null) {
            this._cachedTags = _.union.apply(this, this.pluck('tags'));
        }
        return this._cachedTags;
    },
    sync: function(method, model, options) {
        if (method === 'read') {
            var me = this;
            // Make an XHR request to get data for this demo
            Backbone.ajax({
                url: '/echo/json/',
                method: 'POST',
                data: {
                    // Feed mock data into JSFiddle's mock XHR response
                    json: JSON.stringify([
                        { id: 1, name: 'one', tags: [ 'number', 'first', 'odd' ] },
                        { id: 2, name: 'two', tags: [ 'number', 'even' ] },
                        { id: 3, name: 'a', tags: [ 'alphabet', 'first' ] }
                    ])
                },
                success: function(resp) {
                    options.success(me, resp, options);
                },
                error: function() {
                    if (options.error) {
                        options.error();
                    }
                }
            });
        }
        else {
            // Call the default sync method for other sync methods
            Backbone.Collection.prototype.sync.apply(this, arguments);
        }
    }
});

var listColl = new ListDataCollection();
listColl.fetch({
    success: function() {
        console.log(listColl.getTags());
    }
});
I guess there are two reasons for handling this in the collection:
(1) It keeps the View code cleaner (given that we are not doing very complex logic in the tag extraction - it's just a simple _.pluck() and _.union()).
(2) It has no business logic involved - it can arguably belong to the data layer.
To address the performance issue:
It does go through the collection twice. However, if the amount of data you are consuming is too much for the client to process even in this case, you may want to consider asking the backend to provide an API endpoint for this. (Even 500 pieces of data with a total of 1000 tags shouldn't be too much for a somewhat modern browser to handle nowadays.)
Hmm. Does this help?
JSFiddle to go with this, with the collection and the model: http://jsfiddle.net/dashk/G8LaB/ (and a log statement to demonstrate the result of .getTags()).
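To tie this back to the tag-filtering part of the question, here is a rough, untested sketch of a subview that listens to the same collection, renders toggle buttons from getTags(), and announces toggles for the primary view to filter on (class names and event names are illustrative):
var TagListView = Backbone.View.extend({
    events: {
        'click .tag': 'onTagClick'
    },
    initialize: function() {
        // Re-render whenever the shared collection changes
        this.listenTo(this.collection, 'reset change', this.render);
    },
    render: function() {
        this.$el.html(_.map(this.collection.getTags(), function(tag) {
            return '<button class="tag" data-tag="' + tag + '">' + tag + '</button>';
        }).join(''));
        return this;
    },
    onTagClick: function(e) {
        // Let the primary view decide how to filter; just announce the toggle
        this.trigger('tag:toggle', this.$(e.currentTarget).data('tag'));
    }
});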
