ExtJS 4 Nested Data Loading in Another Way

I think nested data loading in ExtJS 4 is impossible in the real world. For example, I have models for trainings, for users, and a model that describes how trainings are assigned to users, which user assigned the training, and so on. I think it's wrong to do a join in the PHP model (I use ZF 1.x) just to get users' first and last names instead of their identifiers, and it's also wrong to merge the data from two selects in the controller (a select joining the trainings and trainings-to-users tables, and a select from users where the identifier is in the list of identifiers from that join). Good practice is to do it in the view, isn't it?
But ExtJS 4 nested data loading tells me: build a hierarchical structure (with lots of duplicates of the nested data!) and pass it in. Why should I do this work? I'd rather merge the data in the controller!
My final question is: if I have JSON like this:
{
  "trainings": [{"id": "1", "user_id": 278, "assigner_id": 30, ...}, {...}],
  "users": {"278": {"firstname": "Guy", "lastname": "Fawkes"}, "300": {...}, ...}
}
Can I create some other kind of nested data loading from this data?
Thanks.

Yes you can ... sort of. One way to do it is to define separate stores for users and trainings and then call loadData() on each with the corresponding slice of the prefetched response (e.g. loadData(response.users)).
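Here is a minimal sketch (the model definitions, field lists, and the response variable are assumptions, not from the question):

Ext.define('App.model.Training', {
    extend: 'Ext.data.Model',
    fields: ['id', 'user_id', 'assigner_id']
});

Ext.define('App.model.User', {
    extend: 'Ext.data.Model',
    fields: ['id', 'firstname', 'lastname']
});

var trainingStore = Ext.create('Ext.data.Store', { model: 'App.model.Training' });
var userStore = Ext.create('Ext.data.Store', { model: 'App.model.User' });

// 'response' is the decoded JSON object from the single request
trainingStore.loadData(response.trainings);

// the users arrive keyed by id, so fold each key back into its record first
var users = [];
Ext.Object.each(response.users, function (id, data) {
    users.push(Ext.apply({ id: id }, data));
});
userStore.loadData(users);

// later, resolve the user for a training row through the user store
// (ids in the users map are strings, so cast before the lookup)
var user = userStore.getById(String(trainingStore.first().get('user_id')));

This keeps the response flat, with no duplicated user data, and both stores are filled from a single request.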

Related

How to Create a Relationship Between Two Different Columns in Neo4j

I am trying to create a relationship between two columns in Neo4j. My dataset is a CSV file whose two columns refer to co-authorship, and I want to construct a network from it. I have already loaded the data, returned it, and matched it.
Loading the data:
load csv from 'file:///conet1.csv' as rec
Creating the nodes:
create (:Guys {source: rec[0], target: rec[1]})
Now I need to construct the collaboration network by making a relationship between the source and target columns. What do you propose for this purpose?
I was able to create a relationship between the mentioned columns with the NetworkX graph library in Python, like this:
import pandas as pd
import networkx as nx

# read just the two edge columns from the CSV file
df = pd.read_csv('Colab.csv', usecols=['source', 'target'])
# build an undirected graph from the edge list
g = nx.from_pandas_edgelist(df, 'source', 'target')
If I understand your use case, I do not believe you should be creating Guys nodes just to store relationship info. Instead, the graph-oriented approach would be to create an Author node for each author and a relationship (say, of type COLLABORATED_WITH) between the co-authors.
This might work for you, or at least give you a clue:
LOAD CSV FROM 'file:///conet1.csv' AS rec
MERGE (source:Author {id: rec[0]})
MERGE (target:Author {id: rec[1]})
CREATE (source)-[:COLLABORATED_WITH]->(target)
If it is possible that the same relationship could be re-created, you should replace the CREATE with a more expensive MERGE. Also, a work can have any number of co-authors, so having a relationship between every pair may be sub-optimal depending on what you are trying to do; but that is a separate issue.
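A fully idempotent variant of the query above would then be (same file and labels as before):

LOAD CSV FROM 'file:///conet1.csv' AS rec
MERGE (source:Author {id: rec[0]})
MERGE (target:Author {id: rec[1]})
MERGE (source)-[:COLLABORATED_WITH]->(target)

Here re-running the load, or hitting a repeated pair in the CSV, no longer creates duplicate relationships.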

Laravel get value of column from relation through pivot on load() after all()

I have a problem retrieving values of a column from relations in Laravel.
I have a User model. This model has a relation to a table through a model named Userhobbies.
For now we have:
User ::: hasMany >>> Userhobbies
Now, with User::all()->load('hobbies'), I'm getting correct results like:
{"id":"1","username":"jdoe","first_name":"Joe","last_name":"Doe","birth":"
1992-04-11","picture_id":"f3dca65323e876026b409b9ba3d49c56","hobbies":
[{"hobby_id":"1","user_id":"1"},{"hobby_id":"2","user_id":"1"},
{"hobby_id":"3","user_id":"1"},{"hobby_id":"4","user_id":"1"}]}
As you can see, Userhobbies contains only the primary-key relations between the hobby table (the Hobby model) and the user table (the User model).
(The Hobby model also has a hasMany relation to Userhobbies.)
My question now is: how do I retrieve all hobby names (from the hobby table) in the same call (after load('hobbies')), and is it possible without writing a lot of code?
For a better understanding of my idea, this is the result I want to retrieve:
{"id":"1","username":"jdoe","first_name":"Joe","last_name":"Doe","birth":"
1992-04-11","picture_id":"f3dca65323e876026b409b9ba3d49c56","hobbies":
["golf", "cards", "games", "football"]}
EDIT:
If I try the following (I tried it with belongsToMany on both User and Hobby):
User::with('hobbies')->get()->first()
I'm getting all the values from the hobbies table:
{user-specific data ...
 "hobbies":[
   {"id":"1","name":"golf","created_at":"2015-04-07 14:15:02",
    "updated_at":"2015-04-07 14:15:02","pivot":{"user_id":"1","hobby_id":"1"}},
   {"id":"2","name":"cards","created_at":"2015-04-07 14:15:02",
    "updated_at":"2015-04-07 14:15:02","pivot":{"user_id":"1","hobby_id":"2"}},
   {"id":"3","name":"games","created_at":"2015-04-07 14:15:02",
    "updated_at":"2015-04-07 14:15:02","pivot":{"user_id":"1","hobby_id":"3"}},
   {"id":"4","name":"football","created_at":"2015-04-07 14:15:02",
    "updated_at":"2015-04-07 14:15:02","pivot":{"user_id":"1","hobby_id":"4"}}]}
The same happens with ->load('hobbies'). I really don't know how to go on.
To explain a bit more what I need, one could imagine a query like this:
User::all(['id', 'name'])->load(array('hobbies.id','hobbies.name'))->get();
As far as I know, it's possible to use a closure to set constraints on the query that performs the load, like so:
User::all()->load(['hobbies' => function ($query) {
    $query->select('id', 'name');
}]);
By doing this, when you cast the result to an array it will produce something close to what you want. You can even add 'pivot' to the $hidden property on your Hobby model to hide that information.
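If you want the flat array of names exactly as in the question, one option (a sketch, not tested against a specific Laravel version) is to flatten the loaded relation afterwards:

$user = User::with('hobbies')->first();

// Collection::lists() returns a flat array of the given column;
// on newer Laravel versions the method is named pluck() instead
$hobbyNames = $user->hobbies->lists('name');
// ["golf", "cards", "games", "football"]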

Firebase + AngularFire -> States?

I'd like to know how I would deal with object states in a Firebase environment.
What do I mean by states? Well, let's say you have an app with which you organize order lists. Each list consists of a bunch of orders, so it can be considered a hierarchical data structure. Furthermore each list has a state which might be one of the following:
deferred
open
closed
sent
acknowledged
goods completely received
goods partially received
something else
On the visual (HTML) side the lists shall be distinguished by their state. Each state shall be presented to the client in its own, say, div-element, listing all the related orders beneath.
So the question is, how do I deal with this state in Firebase (or any other document-based database)?
structure
Do I...
... (option 1) use a state field for each orderlist and filter on the client side by using if or something similar:
orderlist1.state = open
order1
order2
orderlist2.state = open
order1
orderlist3.state = closed
orderlist4.state = deferred
... (option 2) use the hierarchy of FireBase to classify the orderlists like so:
open
orderlist1
order1
order2
orderlist2
order1
closed
orderlist3
deferred
orderlist4
... (option 3) take a totally different approach?
So, what's the royal road here?
retrieval, processing & visual output of option 2
Since for option 1 the answer to this question is apparently pretty straightforward (if state == ...), I'll continue with option 2: how do I retrieve the data in option 2? Do I use a Firebase object for each state, like so:
var closedRef = new Firebase("https://xxx.firebaseio.com/closed");
var openRef = new Firebase("https://xxx.firebaseio.com/open");
var deferredRef = new Firebase("https://xxx.firebaseio.com/deferred");
var somethingRef = new Firebase("https://xxx.firebaseio.com/something");
Or what's considered the best approach to deal with that sort of data/structure?
There is no universal answer to this question. The "best approach" is going to depend on the particulars of your use case, which you haven't provided here. Specifically, how you will be reading and manipulating the data.
Data architecture in NoSQL is all about working hard on writes to make reads easy. It's all about how you plan to use the data. (It's also enough material for a chapter in a book.)
The advantage to "option 1" is that you can easily iterate the entire list. Great if your list is measured in hundreds. This is a great approach if you want to fetch the list and manipulate it on the fly on the client side.
The advantage to "option 2" is that you can easily grab a subset of the list. Great if your list is measured in thousands and you will typically be fetching open issues only rather than closed ones. This is great for archiving/new/old lists like yours.
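To make option 1 concrete, here is a sketch using the same legacy API as the question (the /orderlists path is an assumption): fetch the lists once, then bucket them by state on the client.

var listsRef = new Firebase("https://xxx.firebaseio.com/orderlists");

listsRef.on("value", function (snapshot) {
  var byState = {};
  snapshot.forEach(function (child) {
    var list = child.val();
    (byState[list.state] = byState[list.state] || []).push(list);
  });
  // byState.open, byState.closed, ... can now each feed their own div
});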
There are other options as well.
Sorted Data using Priorities
Perhaps the most universal approach is to use ordered data. This allows you to query a subset of your records using something like:
new Firebase(URL).startAt('open').endAt('open').limit(10);
This is sufficient in most cases where you have only one criterion, or when you can create a unique identifier from multiple criteria (e.g. 'open:marketing') without difficulty. Examples are scoreboards, state lists like yours, and data ordered by timestamps.
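For such a query to match anything, each record has to be written with its state as its priority. A sketch with the legacy API (the path and fields are assumptions):

var ref = new Firebase("https://xxx.firebaseio.com/orderlists").push();
ref.setWithPriority({ name: "orderlist1", state: "open" }, "open");

// when the state changes, the priority must be updated as well
ref.setPriority("closed");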
Using an index
You can also create custom subsets of your data by creating an index of keys and using that to fetch the full records.
This is most useful when there is no identifiable characteristic of your subsets. For example, if I pick them from a list and store my favorites.
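A sketch of such an index with the legacy API (all paths here are assumptions): the index node holds only keys, and each key is resolved against the real records.

var favRef = new Firebase("https://xxx.firebaseio.com/users/me/favorites");
var listRef = new Firebase("https://xxx.firebaseio.com/orderlists");

favRef.on("child_added", function (snap) {
  // snap.key() is the id of an indexed order list; fetch the real record
  listRef.child(snap.key()).once("value", function (record) {
    console.log(record.val());
  });
});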
I think this plnkr of mine can help you with this.
Click on edit/add and check the dependent country/state dropdowns (country would be your order, state your state); they may behave the same way you want. The one thing you may need to add is filtering.
Country and state are separate tables in the db.
You can also get the code from git.

Django Query Optimisation

I am currently working on a telecom analytics project and am a newbie at query optimisation. Showing the result in the browser takes a full minute, while only 45,000 records need to be accessed. Could you please suggest ways to reduce the time it takes to show results?
I wrote the following query to find the call duration for people in an age group:
from django.db.models import Sum

sigma = 0
popn = len(Demo.objects.filter(age_group=age))
card_list = [Demo.objects.filter(age_group=age)[i].card_no
             for i in range(popn)]
for card in card_list:
    dic = Fact_table.objects.filter(card_no=card).aggregate(Sum('duration'))
    sigma += dic['duration__sum']
avgDur = sigma / popn
The code above sits inside a for loop that iterates over the age groups.
The models are as follows:
class Demo(models.Model):
    card_no = models.CharField(max_length=20, primary_key=True)
    gender = models.IntegerField()
    age = models.IntegerField()
    age_group = models.IntegerField()

class Fact_table(models.Model):
    pri_key = models.BigIntegerField(primary_key=True)
    card_no = models.CharField(max_length=20)
    duration = models.IntegerField()
    time_8bit = models.CharField(max_length=8)
    time_of_day = models.IntegerField()
    isBusinessHr = models.IntegerField()
    Day_of_week = models.IntegerField()
    Day = models.IntegerField()
Thanks
Try this:
demo_by_age = Demo.objects.filter(age_group=age)
popn = demo_by_age.count()  # One
card_list = demo_by_age.values_list('card_no', flat=True)  # Two
dic = Fact_table.objects.filter(card_no__in=card_list).aggregate(Sum('duration'))  # Three
sigma = dic['duration__sum']
avgDur = sigma / popn
A statement like card_list=[Demo.objects.filter(age_group=age)[i].card_no for i in range(popn)] will generate popn separate queries and database hits. The query in the for loop will also hit the database popn times. As a general rule, you should try to minimize the number of queries you use, and you should only select the records you need.
With a few adjustments to your code this can be done in just one query.
There's generally no need to manually specify a primary_key, and in all but some very specific cases it's even better not to define any. Django automatically adds an indexed, auto-incremental primary key field. If you need the card_no field as a unique field, and you need to find rows based on this field, use this:
class Demo(models.Model):
    card_no = models.SlugField(max_length=20, unique=True)
    ...
SlugField automatically adds a database index to the column, making selections by this field essentially as fast as if it were the primary key. It still lets other ways of accessing the table, e.g. foreign keys (as I'll explain in my next point), use the (slightly) faster integer field Django provides, and it will ease the use of the model in Django.
If you need to relate an object to an object in another table, use models.ForeignKey. Django gives you a whole set of new functionality that not only makes it easier to use the models, it also makes a lot of queries faster by using JOIN clauses in the SQL query. So for your example:
class Fact_table(models.Model):
    card = models.ForeignKey(Demo, related_name='facts')
    ...
The related_name argument allows you to access all Fact_table objects related to a Demo instance by using instance.facts in Django. (See https://docs.djangoproject.com/en/dev/ref/models/fields/#module-django.db.models.fields.related)
With these two changes, your query (including the loop over the different age_groups) can be changed into a blazing-fast one-hit query giving you the average duration of calls made by each age_group:
from django.db.models import Avg

age_groups = Demo.objects.values('age_group').annotate(duration_avg=Avg('facts__duration'))
for group in age_groups:
    print "Age group: %s - Average duration: %s" % (group['age_group'], group['duration_avg'])
.values('age_group') selects just the age_group field from the Demo's database table. .annotate(duration_avg=Avg('facts__duration')) takes every unique result from values (thus each unique age_group), and for each unique result will fetch all Fact_table objects related to any Demo object within that age_group, and calculate the average of all the duration fields - all in a single query.

Paging arrays in mongodb subdocument

I have a mongo collection with documents that have a schema structured like the following:
{ _id : bla,
  fname : foo,
  lname : bar,
  subdocs : [ { subdocname : doc1,
                field1 : one,
                field2 : two,
                potentially_huge_array : [...]
              }, ...
            ]
}
I'm using the Ruby mongo driver, which currently does not support elemMatch, so when extracting from subdocs I use an aggregation with a project, unwind, and match pipeline.
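In mongo shell syntax that pipeline looks roughly like this (the collection name and matched value are guesses based on the example schema):

db.people.aggregate([
  { $project: { subdocs: 1 } },
  { $unwind: "$subdocs" },
  { $match: { "subdocs.subdocname": "doc1" } }
])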
What I would now like to do is to page results from the potentially_huge_array array contained in the subdocument. I have not been able to figure out how to grab just a subset of the array without dragging the entire subdoc, huge array and all, out of the db into my app.
Is there some way to do this?
Would a different schema be a better way to handle this?
Depending on how huge is huge, you definitely don't want it embedded into another document.
The main reason is that unless you always want the array returned with the document, you probably don't want to store it as part of the document. How you can store it in another collection would depend on exactly how you want to access it.
Reviewing the types of queries you most often perform on your data will usually suggest the best schema - one that will allow you to be efficient about number of queries, the amount of data returned and ease of indexing the data.
If your field is really huge and changes often, just place it in a separate collection.
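One possible shape for such a separate collection (all names here are illustrative, not prescriptive): store each array item as its own document, keyed back to its parent subdoc, and page with an indexed sort plus skip/limit.

// one document per former array element
db.array_items.insert({ parent_id: ObjectId("..."), subdocname: "doc1", seq: 0, value: "..." })
db.array_items.ensureIndex({ parent_id: 1, subdocname: 1, seq: 1 })

// page 3, with 20 items per page
db.array_items.find({ parent_id: ObjectId("..."), subdocname: "doc1" })
              .sort({ seq: 1 })
              .skip(40)
              .limit(20)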
