Multiple grids from the same store - extjs

I need to display product-wise sales data in grid panels in ExtJS 3.2 - one grid per product.
The data is received year-wise and is loaded into a JsonStore:
{"list": [
{
"Year": "2014",
"product": "IS",
"total": "5.0",
},
{
"Year": "2013",
"product": "IS",
"total": "5.6",
},
{
"Year": "2014",
"product": "NS",
"total": "5.7",
},
{
"Year": "2013",
"product": "NS",
"total": "5.1",
}
....
......
]
}
The response is processed to convert it into a product-keyed dictionary:
{ "IS": [{"Year":"2014","total":"5.0"},{"Year":"2013","total":"5.6"}, ...],
"NS": [{"Year":"2014","total":"5.7"},{"Year":"2013","total":"5.1"}, ...],
... }
Each key's values are then loaded into a separate array store to feed the respective grid.
Though this works, too many intermediate objects/structures are being created to achieve it.
Is there a more elegant way to load multiple grids from extracts of the store data?

If you have important reasons not to switch to MVC ExtJS 4 or MVVM ExtJS 5, you could provide the data in the needed shape on the server side, e.g. with Node.js fetching the JSON and reworking it. Your requests could then be more specific and your client app would become faster.
Another way would be to write convert() functions on your Ext.data fields and use them as the source for the grid.
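Alternatively, here is a minimal client-side sketch that avoids the intermediate keyed dictionary: keep one master JsonStore and derive one small store per product by querying the master. The endpoint, container and product list below are assumptions, not from the question.
var masterStore = new Ext.data.JsonStore({
    url: 'sales.json',                       // assumed endpoint
    root: 'list',
    fields: ['Year', 'product', 'total']
});

function createProductGrid(product) {
    var store = new Ext.data.JsonStore({
        root: 'list',
        fields: ['Year', 'total']
    });
    // copy the matching records out of the master store
    var copies = [];
    masterStore.queryBy(function (rec) {
        return rec.get('product') === product;
    }).each(function (rec) {
        copies.push(rec.copy());
    });
    store.add(copies);

    return new Ext.grid.GridPanel({
        title: product,
        store: store,
        columns: [
            { header: 'Year', dataIndex: 'Year' },
            { header: 'Total', dataIndex: 'total' }
        ]
    });
}

masterStore.load({
    callback: function () {
        // one grid per distinct product; someContainer is assumed
        Ext.each(['IS', 'NS'], function (p) {
            someContainer.add(createProductGrid(p));
        });
        someContainer.doLayout();
    }
});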

Related

Get documents within the last n days using mongoose

I have MongoDB documents in the following format which represent a location:
{
"_id": "632987336a2913f401318f82",
"name": "1 Ave, Infinite Loop St",
"timezone": "Europe/Paris",
"location": {
"lat": 48.858093,
"lng": 2.294694
},
"detections": [
{
"approach": "North Bound",
"class": "Person",
"datetime": "05:00 pm",
"lane": 3,
"movement": "Through"
},
{
"approach": "North Bound",
"class": "Person",
"datetime": "05:00 pm",
"lane": 3,
"movement": "Through"
},
// ...more detections
]
}
I know the _id of the site that I want to fetch, but when I fetch it I only need the detections from the last 7 days, based on the datetime property inside the detections array. I am unsure how to achieve this using mongoose. Currently I fetch all of the site's detections and then filter them myself, which is a tedious approach. This is how I am fetching the site detections:
const site = await SiteModel.findById(siteID);
const detections = site.detections;
This answer describes a way to implement this behaviour, but it uses aggregate, and in my case the key is nested in an array of objects. I am new to MongoDB, so any help is appreciated.
You need to store real timestamps in the "datetime" fields; strings like "05:00 pm" cannot be compared by date.
You can then use $elemMatch to filter specific data from the array of objects, although to return every detection from the last 7 days an aggregation $filter is the better fit.
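A minimal sketch of the aggregation approach, assuming datetime is stored as a real Date and SiteModel is the model from the question (getRecentDetections is an illustrative helper name):
const mongoose = require('mongoose');

// Minimal sketch, assuming "datetime" holds real Date values rather than
// strings like "05:00 pm".
async function getRecentDetections(siteID, days = 7) {
  const cutoff = new Date(Date.now() - days * 24 * 60 * 60 * 1000);

  const [site] = await SiteModel.aggregate([
    { $match: { _id: new mongoose.Types.ObjectId(siteID) } },
    {
      $project: {
        name: 1,
        timezone: 1,
        location: 1,
        // keep only detections newer than the cutoff
        detections: {
          $filter: {
            input: '$detections',
            as: 'd',
            cond: { $gte: ['$$d.datetime', cutoff] }
          }
        }
      }
    }
  ]);

  return site ? site.detections : [];
}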

How to add child entities without id to parent in state normalized with normalizr

I've recently started using normalizr with zustand in a new React app. It's been a very good experience so far, having solved most of the painful problems I've had in the past.
I've just bumped into an issue that I haven't been able to solve cleanly for the past few days.
Imagine I have a normalizr-normalized state looking like:
{
"entities": {
"triggers": {
"1": {
"id": 1,
"condition": "WHEN_CURRENCY_EXCHANGED",
"enabled": true,
"value": "TRY"
},
"2": {
"id": 2,
"condition": "WHEN_CURRENCY_EXCHANGED",
"enabled": true,
"value": "GBP"
},
"3": {
"id": 3,
"condition": "WHEN_TRANSACTION_CREATED",
"enabled": true,
"value": true
}
},
"campaigns": {
"19": {
"id": 19,
"name": "Some campaign name",
"triggers": [
1,
2,
3
]
}
}
},
"result": 19
}
And we have a page that allows a user to add one or more triggers to the campaign and then save them. The problem is that at the time of adding these triggers, they do not have an id until the user clicks the Save button (ids are generated by the database). When the Save button is clicked, the state is being denormalized (via normalizr's denormalize function) and sent as payload to the backend looking like the following:
{
"id": 19,
"name": "Some campaign name",
"triggers": [
{
"id": 1,
"condition": "WHEN_CURRENCY_EXCHANGED",
"enabled": true,
"value": "TRY"
},
{
"id": 2,
"condition": "WHEN_CURRENCY_EXCHANGED",
"enabled": true,
"value": "GBP"
},
{
"id": 3,
"condition": "WHEN_TRANSACTION_CREATED",
"enabled": true,
"value": true
}
]
}
The problem is that when the user adds a new trigger it has no id yet (ids are generated by the database), and I cannot find a proper way to add it to the state, given the id-based nature of normalized states.
The only workaround I can think of is generating some temporary IDs (e.g. uuid) when a trigger is added on the front-end but is not yet saved and then going over each entity upon denormalization, doing something like if (isUuid(trigger.id)) delete trigger.id, which seems too tedious and workaroundish.
Appreciate your help.
P.S. There is something similar explained here. The problem is that in our case the generateId('comment') logic is happening on the backend.
A simple solution is to split the "create trigger" API call from the "add trigger to campaign" API call.
Do the first, then save the trigger into the normalized store with the id generated by the backend.
Then add it to the campaign.
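As a rough sketch of that two-step flow (the api helper, triggerSchema module and the store actions below are illustrative assumptions, not normalizr or zustand APIs):
import { normalize } from 'normalizr';
import { triggerSchema } from './schemas';   // assumed schema module
import { api } from './api';                 // assumed HTTP helper
import { useStore } from './store';          // assumed zustand store

// Sketch of the split flow: create the trigger first, then attach it.
async function saveNewTrigger(campaignId, draftTrigger) {
  // 1. create the trigger on the backend; the response carries the real id
  const created = await api.post('/triggers', draftTrigger);

  // 2. merge it into the normalized state
  const { entities, result: triggerId } = normalize(created, triggerSchema);
  useStore.getState().mergeEntities(entities);                     // assumed action

  // 3. link the new id to the campaign, locally and on the backend
  await api.post(`/campaigns/${campaignId}/triggers`, { triggerId });
  useStore.getState().addTriggerToCampaign(campaignId, triggerId); // assumed action
}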

Mapping through multiple arrays of objects and comparing ids

So I have a bit of a dilemma. I am creating a forum and I am calling 3 different APIs. Suppose I have JSON like the following:
let forumPost= [{
"userId": 1,
"postId": 1,
"postTitle": "I am the post title",
"postBody": "I am the post body"
}]
let forumUser= [{
"userId": 1,
"name": "Someone Someone",
"username": "Som",
}]
let forumComment = [{
"postId": 7,
"userId" : 10,
"body": "I don't like your post",
}]
I would now like to create something like this:
Post title
Post body
By user
-------------
CommentUser: CommentBody
CommentUser: CommentBody
In order to do this, I am currently nesting maps over the 3 different arrays of objects and comparing ids, roughly like:
forumPost.map(post =>
  forumUser.map(user => {
    if (post.userId === user.userId) {
      const comments = forumComment.filter(c => c.postId === post.postId);
      return <div>...</div>;
    }
  })
)
I was wondering if there is a more efficient way of mapping and comparing multiple nested objects in vanilla ReactJS?
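One way to make this more efficient is to build lookup maps once and render in a single pass rather than nesting maps. A rough sketch (ForumPage, userById and commentsByPost are illustrative names):
// Build the lookups once, then rendering is a single pass over the posts.
function ForumPage({ forumPost, forumUser, forumComment }) {
  const userById = new Map(forumUser.map(u => [u.userId, u]));

  const commentsByPost = new Map();
  for (const c of forumComment) {
    if (!commentsByPost.has(c.postId)) commentsByPost.set(c.postId, []);
    commentsByPost.get(c.postId).push(c);
  }

  return forumPost.map(post => (
    <div key={post.postId}>
      <h2>{post.postTitle}</h2>
      <p>{post.postBody}</p>
      <span>By {userById.get(post.userId)?.username}</span>
      <hr />
      {(commentsByPost.get(post.postId) || []).map((c, i) => (
        <p key={i}>
          {userById.get(c.userId)?.username}: {c.body}
        </p>
      ))}
    </div>
  ));
}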

How to append data to a resultset in cakephp 3

I have a simple JSON view like this:
{
"articles": [
{
"id": 1,
"title": "Article one"
},
{
"id": 2,
"title": "Article two"
}
]
}
These results are paginated, and I'd like to append the pageCount to the results, something like this:
{
"articles": [
{
"id": 1,
"title": "Article one"
},
{
"id": 2,
"title": "Article two"
}
],
"pageCount": 5
}
How can I achieve this? I can't append it directly because it's a ResultSet object.
Should I be doing this in the view, controller, or model?
Thanks a lot!
The _serialize key
Given that you are using the JsonView and auto-serialization
Cookbook > Views > JSON and XML Views > Using Data Views with the Serialize Key
you can simply add another variable for the view, and mark it for serialization, like
$this->set('articles', $this->paginate());
$this->set('pageCount', $this->request->params['paging'][$this->Articles->alias()]['pageCount']);
$this->set('_serialize', ['articles', 'pageCount']);
Using view templates
If you're not using auto-serialization, but custom view templates
Cookbook > Views > JSON and XML Views > Using a Data View with Template Files
you can make use of the PaginatorHelper in your template and add the data when converting to JSON, like
$pageCount = $this->Paginator->param('pageCount');
echo json_encode(compact('articles', 'pageCount'));

MongoDB Array Query Performance

I'm trying to figure out what the best schema is for a dating-site-like app. Users have a listing (possibly many) and they can view other users' listings to 'like' and 'dislike' them.
Currently I'm just storing the other person's listing id in likedBy and dislikedBy arrays. When a user 'likes' a listing, their listing id is pushed into that listing's likedBy array. However, I would now like to track the timestamp at which a user likes a listing. This would be used for a user's 'history list' or for data analysis.
I would need to do two separate queries:
find all active listings that this user has not liked or disliked before
and for a user's history of 'liked'/'disliked' choices
find all the listings user X has liked in chronological order
My current schema is:
listings
_id: 'sdf3f'
likedBy: ['12ac', 'as3vd', 'sadf3']
dislikedBy: ['asdf', 'sdsdf', 'asdfas']
active: bool
Could I do something like this?
listings
_id: 'sdf3f'
likedBy: [{'12ac', date: Date}, {'ds3d', date: Date}]
dislikedBy: [{'s12ac', date: Date}, {'6fs3d', date: Date}]
active: bool
I was also thinking of making a new collection for choices.
choices
Id
userId // id of current user making the choice
userlistId // listing of the user making the choice
listingChoseId // the listing they chose yes/no
type
date
I'm not sure of the performance implications of having these choices in another collection when doing the find all active listings that this user has not liked or disliked before.
Any insight would be greatly appreciated!
You obviously thought it was a good idea to embed these in the "listings" documents so that your usage patterns beyond the cases presented here work properly. With that in mind, there is no reason to throw that away.
To clarify though, the structure you seem to want is something like this:
{
"_id": "sdf3f",
"likedBy": [
{ "userId": "12ac", "date": ISODate("2014-04-09T07:30:47.091Z") },
{ "userId": "as3vd", "date": ISODate("2014-04-09T07:30:47.091Z") },
{ "userId": "sadf3", "date": ISODate("2014-04-09T07:30:47.091Z") }
],
"dislikedBy": [
{ "userId": "asdf", "date": ISODate("2014-04-09T07:30:47.091Z") },
{ "userId": "sdsdf", "date": ISODate("2014-04-09T07:30:47.091Z") },
{ "userId": "asdfas", "date": ISODate("2014-04-09T07:30:47.091Z") }
],
"active": true
}
Which is all well and fine, except that there is one catch: because you have this content in two array fields, you cannot create an index over both of them. MongoDB allows only one array (multikey) field within a compound index.
So to solve the obvious problem of your first query not being able to use an index, you would structure the document like this instead:
{
"_id": "sdf3f",
"votes": [
{
"userId": "12ac",
"type": "like",
"date": ISODate("2014-04-09T07:30:47.091Z")
},
{
"userId": "as3vd",
"type": "like",
"date": ISODate("2014-04-09T07:30:47.091Z")
},
{
"userId": "sadf3",
"type": "like",
"date": ISODate("2014-04-09T07:30:47.091Z")
},
{
"userId": "asdf",
"type": "dislike",
"date": ISODate("2014-04-09T07:30:47.091Z")
},
{
"userId": "sdsdf",
"type": "dislike",
"date": ISODate("2014-04-09T07:30:47.091Z")
},
{
"userId": "asdfas",
"type": "dislike",
"date": ISODate("2014-04-09T07:30:47.091Z")
}
],
"active": true
}
This allows an index that covers this form:
db.post.ensureIndex({
"active": 1,
"votes.userId": 1,
"votes.date": 1,
"votes.type": 1
})
Actually you will probably want a few indexes to suit your usage patterns, but the point is that now you have indexes you can use.
Covering the first case you have this form of query:
db.post.find({ "active": true, "votes.userId": { "$ne": "12ac" } })
That makes sense considering that you clearly are not going to have both a like and a dislike entry for the same user. By the order of that index, at least active can be used to filter, because your negating condition needs to scan everything else. There is no way around that with any structure.
For the other case you probably want the userId to be in an index before the date and as the first element. Then your query is quite simple:
db.post.find({ "votes.userId": "12ac" })
.sort({ "votes.userId": 1, "votes.date": 1 })
You may be wondering whether you have suddenly lost something here, in that getting the count of "likes" and "dislikes" used to be as easy as testing the size of each array, whereas now it's a little different. That is not a problem that cannot be solved using aggregate:
db.post.aggregate([
{ "$unwind": "$votes" },
{ "$group": {
"_id": {
"_id": "$_id",
"active": "$active"
},
"likes": { "$sum": { "$cond": [
{ "$eq": [ "$votes.type", "like" ] },
1,
0
]}},
"dislikes": { "$sum": { "$cond": [
{ "$eq": [ "$votes.type", "dislike" ] },
1,
0
]}}
}}
])
So whatever your actual usage form, you can keep any important parts of the document in the grouping _id and then evaluate the counts of "likes" and "dislikes" in an easy manner.
You may also note that changing an entry from like to dislike can be done in a single atomic update.
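For example, with the "votes" structure above, flipping a vote could look roughly like this, using the positional $ operator (the concrete ids are just the sample values from earlier):
db.post.update(
    { "_id": "sdf3f", "votes.userId": "12ac" },
    { "$set": {
        "votes.$.type": "dislike",
        "votes.$.date": new Date()
    }}
)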
There is much more you can do, but I would prefer this structure for the reasons as given.
