Two-way synchronization between View Objects - oracle-adf

I have two view objects in Oracle ADF.
LineVO represents order lines -- with one line per product.
Products are differentiated by several attributes... say "model" and "color". So, VO #1 contains a row for each model/color combination.
ModelVO represents a model-level summary of the lines.
Both VOs have a "quantity" field (an Integer).
There is a ViewLink between them and each has a row accessor to the other.
I want to achieve two-way coordination between these two view objects, such that:
When a user queries data, ModelVO.Quantity equals the sum of LineVO.Quantity, for the associated rows
When a user updates any LineVO.Quantity, the ModelVO.Quantity is immediately updated to reflect the new total
When a user updates a ModelVO.Quantity, the quantity is spread among the associated LineVO rows (according to complex business logic which I hope is not relevant here).
I have tried many different ways to do this and cannot get it perfect.
Right now, I am working on variations where ModelVO.Quantity is set to a Groovy expression "LineVO.sum('Quantity')". Unfortunately, everything I try either has the summing from LineVO->ModelVO working or the spreading from ModelVO->LineVO working, but never both at the same time.
Can someone suggest a way to do this? I want to do it in the model layer (either an EO or a VO, or a combination).

Never mind... it turns out to be simple:
ModelVO.Quantity must be set to the Groovy expression "LineVO.sum('Quantity')", and it must have a recalcExpression pointing at a method I control, so that it only recalculates when I am changing a LineVO.Quantity value.
The reason my approach didn't work initially was that, when the user updated a LineVO.Quantity value and I wanted to recalculate, I was getting the ModelVO row via lineVORow.getModelVO()... i.e., via a view accessor.
Apparently, that returns some sort of internal copy of the row and not the actual row.
When I got the parent row via the applicationModule.getModelVO().getCurrentRow(), the whole thing works perfectly.
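Roughly, the shape of the working variant in plain Java (a sketch only, not the exact code; the class and method names follow ADF's generated-impl conventions and the place you hook the recalculation may differ in your project):

// Inside the generated LineVORowImpl class: after the quantity changes, touch the
// parent summary row obtained through the application module, NOT through
// lineVORow.getModelVO() (the view accessor), which returned a copy of the row.
public void setQuantity(Integer value) {
    setAttributeInternal(QUANTITY, value);

    ApplicationModuleImpl am =
        (ApplicationModuleImpl) getViewObject().getApplicationModule();
    Row modelRow = am.findViewObject("ModelVO").getCurrentRow();
    if (modelRow != null) {
        // reading the attribute on the real row lets the Groovy sum /
        // recalcExpression fire against it instead of the accessor copy
        modelRow.getAttribute("Quantity");
    }
}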
I've posted another question about why accessing the row via the view accessor did not work. That part is still a mystery to me.

Related

Angularjs slow with a table

I have a table with about 30 rows and about 10 columns. The rows are a subrange of a much bigger set (which I compute manually in order to avoid a huge DOM). The columns are stored in a list like [{name: "firstname", width: 200}, {name: "married", type: "bool"}], which allows some flexibility (like showing the property "married" as a checkbox).
So there are just about 300 fields, yet the digest cycle takes about one second (on my i5-2400 CPU @ 3.10 GHz).
I'm having trouble interpreting the Batarang performance page. It says:
p.name | translate 16.0% 139.6ms
e[c.name] 15.8% 138.4ms
c.name | translate 11.1% 96.3ms
The meaning of the (sparsely named) variables is clear to me:
e stands for entity, i.e., table row.
p stands for property and only occurs outside of the table.
c stands for column.
e[c.name] stands for the field content (from entity e the property named by c).
But the performance figures make little sense:
p.name is only used maybe 10 times; how can it take that long?
c.name | translate occurs only about 10 times, too (in the header row); how can it take that long?
I'm aware of {::a_once_only_bound_expression}, and I tried it, but without much success. What I'd actually need is the following:
When c changes, re-create the whole table (this happens only exceptionally, so I don't care about speed).
When e changes, re-create its whole row (when there's a change, then only in a single row).
Any way to achieve this?
A solution idea
I guess what I need could be achieved with a directive that strips all the Angular machinery from the row after rendering:
drop all child scopes
with all their watches
but keep all the HTML and listeners
I could add a single watch per row, responsible for repainting it when needed (roughly as sketched below).
Does it make sense?
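Something along these lines is what I have in mind (a minimal sketch, assuming an AngularJS 1.x module named app and columns/entity already on the scope; it ignores the translate filter, the checkbox columns and HTML escaping):

app.directive('staticRow', function () {
  return {
    restrict: 'A',
    link: function (scope, element) {
      function render() {
        // build the cells as plain HTML: no child scopes, no per-cell watches
        var html = scope.columns.map(function (c) {
          var value = scope.entity[c.name];
          return '<td>' + (value == null ? '' : value) + '</td>';
        }).join('');
        element.html(html);
      }
      render();
      // the single watch per row: repaint only when the entity reference changes
      scope.$watch('entity', render);
    }
  };
});

It would be used roughly as <tr static-row ng-repeat="entity in entities track by entity.id"></tr>.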
Update
I've been rather busy working on the application, improving things other than performance. I was lucky and got some performance back as a bonus. Then I simplified the pages a little, and the speed is acceptable now, at least for the moment.
Still:
I don't trust the above Batarang performance values.
I'm still curious how to implement the above solution idea and if it makes sense.
You may want to look at ngTable, which builds rows and columns from JSON data; it might solve your performance issue too. I recommend checking it out.

Order Coredata Sub-entity

My entity called Score contains entities called Message. It's a one-to-many relationship; we can have many Messages for a single Score.
When I fetch my Score, I can access my Message objects in an NSOrderedSet, because I ticked Ordered in the .xcdatamodel file.
They're just not ordered properly and I'd like to know if that can be fixed.
I'm displaying them in a table view from an array built like this:
//_score exists and is set properly.
someArray = [[NSMutableArray alloc] init];
for (Message *msg in _score.messages)
{
    //Do my stuff because objects need remodeling in that view
    [someArray addObject:msg];
}
[tableview reloadData]; //the tableview uses someArray
Everything works like a charm, except it's just not in the right order.
Where (if possible) can I tell the model to order by "creationDate", for example? The "Ordered" tickbox seems to order things in a way that doesn't work for me.
The ordered set is in whatever order you originally populated the relationship (either in one go or as items were added to the end of the relationship).
So, if you can guarantee that you can add them in order or you write some code to insert into the relationship in the correct place then you can continue with your current code.
Alternatively, you can create a fetch request that uses the relationship 'backwards' to find Messages for a specified Score and sorts them appropriately. The main benefit here is that you can decide to change the sort order if you want to on-the-fly, you can specify multiple sort orderings (allowing the user to change if they want) and you can explicitly set the fetch batch size (which can help if you have lots of messages).
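A rough sketch of that second approach, assuming Message has a creationDate attribute and a score inverse relationship, and that context is your NSManagedObjectContext (names are illustrative, not from your model):

NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Message"];
request.predicate = [NSPredicate predicateWithFormat:@"score == %@", _score];
request.sortDescriptors = @[ [NSSortDescriptor sortDescriptorWithKey:@"creationDate"
                                                           ascending:YES] ];
request.fetchBatchSize = 50; // optional, useful when a Score has many Messages

NSError *error = nil;
NSArray *sortedMessages = [context executeFetchRequest:request error:&error];
// sortedMessages is already in order and can back the table view directly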

Django Query Optimisation

I am currently working on a telecom analytics project and am a newbie at query optimisation. Showing the result in the browser takes a full minute, while only 45,000 records have to be accessed. Could you please suggest ways to reduce the time it takes to show results?
I wrote the following query to find the call duration of people in an age group:
sigma = 0
popn = len(Demo.objects.filter(age_group=age))
card_list = [Demo.objects.filter(age_group=age)[i].card_no
             for i in range(popn)]
for card in card_list:
    dic = Fact_table.objects.filter(card_no=card).aggregate(Sum('duration'))
    sigma += dic['duration__sum']
avgDur = sigma / popn
The code above sits inside a for loop that iterates over the age groups.
The models are as follows:
class Demo(models.Model):
    card_no = models.CharField(max_length=20, primary_key=True)
    gender = models.IntegerField()
    age = models.IntegerField()
    age_group = models.IntegerField()

class Fact_table(models.Model):
    pri_key = models.BigIntegerField(primary_key=True)
    card_no = models.CharField(max_length=20)
    duration = models.IntegerField()
    time_8bit = models.CharField(max_length=8)
    time_of_day = models.IntegerField()
    isBusinessHr = models.IntegerField()
    Day_of_week = models.IntegerField()
    Day = models.IntegerField()
Thanks
Try this:
from django.db.models import Sum

sigma = 0
demo_by_age = Demo.objects.filter(age_group=age)
popn = demo_by_age.count()  # One
card_list = demo_by_age.values_list('card_no', flat=True)  # Two
dic = Fact_table.objects.filter(card_no__in=card_list).aggregate(Sum('duration'))  # Three
sigma = dic['duration__sum']
avgDur = sigma / popn
A statement like card_list=[Demo.objects.filter(age_group=age)[i].card_no for i in range(popn)] will generate popn separate queries and database hits. The query in the for loop will also hit the database popn times. As a general rule, you should try to minimise the number of queries you run, and you should only select the records you need.
With a few adjustments to your code this can be done in just one query.
There's generally no need to manually specify a primary_key, and in all but a few very specific cases it's better not to define one at all. Django automatically adds an indexed, auto-incrementing primary key field. If you need card_no to be a unique field that you look rows up by, use this:
class Demo(models.Model):
    card_no = models.SlugField(max_length=20, unique=True)
    ...
SlugField automatically adds a database index to the column, making lookups on this field essentially as fast as on a primary key. It still lets other ways of accessing the table, e.g. foreign keys (as I'll explain in my next point), use the (slightly) faster integer key that Django defines, and it makes the model easier to work with in Django.
If you need to relate an object to an object in another table, use models.ForeignKey. Django gives you a whole set of new functionality that not only makes it easier to use the models, but also makes a lot of queries faster by using JOIN clauses in the SQL query. So for your example:
class Fact_table(models.Model):
    card = models.ForeignKey(Demo, related_name='facts')
    ...
The related_name argument allows you to access all Fact_table objects related to a Demo instance via instance.facts in Django. (See https://docs.djangoproject.com/en/dev/ref/models/fields/#module-django.db.models.fields.related)
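For example, assuming a Demo row exists for a given card number (the value below is made up), the reverse accessor can be used like this:

from django.db.models import Sum

demo = Demo.objects.get(card_no='1234567890')  # hypothetical card number
total_duration = demo.facts.aggregate(Sum('duration'))['duration__sum']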
With these two changes, your query (including the loop over the different age_groups) can be changed into a blazing-fast one-hit query giving you the average duration of calls made by each age_group:
from django.db.models import Avg

age_groups = Demo.objects.values('age_group').annotate(duration_avg=Avg('facts__duration'))
for group in age_groups:
    print "Age group: %s - Average duration: %s" % (group['age_group'], group['duration_avg'])
.values('age_group') selects just the age_group field from Demo's database table. .annotate(duration_avg=Avg('facts__duration')) takes every unique result from values (thus each unique age_group), fetches all Fact_table objects related to any Demo object within that age_group, and calculates the average of their duration fields, all in a single query.
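Under the hood that annotate() ends up as one grouped query, roughly of this shape (the table and column names here are only Django's default naming conventions, prefixed by your app label in reality):

SELECT demo.age_group, AVG(fact_table.duration) AS duration_avg
FROM demo
LEFT OUTER JOIN fact_table ON fact_table.card_id = demo.id
GROUP BY demo.age_group;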

How to filter rows with null references in Google App Engine DB

I have a model UnitPattern, which references another model UnitPatternSet, e.g.:
class UnitPattern(db.Model):
    unit_pattern_set = db.ReferenceProperty(UnitPatternSet)
In my view I want to display all UnitPatterns whose unit_pattern_set reference is None, but the query UnitPattern.all().filter("unit_pattern_set =", None) returns nothing, even though I have 5 UnitPatterns in total, of which 2 have unit_pattern_set set and 3 don't.
e.g.
print 'Total',UnitPattern.all().count()
print 'ref set',UnitPattern.all().filter("unit_pattern_set !=", None).count()
print 'ref not set',UnitPattern.all().filter("unit_pattern_set =", None).count()
outputs:
Total 5
ref set 2
ref not set 0
Shouldn't sum of query 2 and 3 be equal to query 1 ?
The reason seems to be that I added the reference property unit_pattern_set later on, and these UnitPattern objects existed before that. But then how can I filter such entities?
This is described succinctly in the docs:
An index only contains entities that have every property referred to by the index. If an entity does not have a property referred to by an index, the entity will not appear in the index, and will never be a result for the query that uses the index.
Note that the App Engine datastore makes a distinction between an entity that does not possess a property and an entity that possesses the property with a null value (None). If you want every entity of a kind to be a potential result for a query, you can use a data model that assigns a default value (such as None) to properties used by query filters.
In your case, you have 3 entities that don't have the unit_pattern_set property set at all (because that property wasn't defined in the model at the time those entities were created). The property therefore doesn't exist in the datastore representation of those entities, so they never appear in the index on that property for that kind of entity.
Dan Sanderson's book Programming Google App Engine explains this in great detail on ~page 150 (unfortunately not available in the Google Books preview)
To fix the models you already have, you'll have to iterate over a query on UnitPattern (I've not tested the following code, please check it before you run it on your live data):
patterns = UnitPattern.all()
for pattern in patterns:
    if not pattern.unit_pattern_set:
        pattern.unit_pattern_set = None
        pattern.put()
Edit: Also, the "Updating your model's schema" article discusses strategies you can use to handle schema changes such as this in the future. However, that article is quite old, and its method requires a web browser to keep hitting a URL to trigger the next job to update more records; now that task queues exist, you could use a series of tasks to make the change. The article on using deferred.defer has a framework you could reuse: it does a small amount of work, catches the DeadlineExceededError, and uses the handler to queue a new task which picks up where the current task left off.
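A hedged sketch of that task-queue variant (the function name and batch size are illustrative, and the DeadlineExceededError handling is omitted for brevity):

from google.appengine.ext import db, deferred

BATCH_SIZE = 100

def backfill_unit_pattern_set(cursor=None):
    query = UnitPattern.all()
    if cursor:
        query.with_cursor(cursor)
    patterns = query.fetch(BATCH_SIZE)
    if not patterns:
        return  # nothing left to migrate
    for pattern in patterns:
        if not pattern.unit_pattern_set:
            pattern.unit_pattern_set = None  # write the property so it gets indexed
    db.put(patterns)
    # queue the next batch, picking up where this one left off
    deferred.defer(backfill_unit_pattern_set, query.cursor())

You'd kick it off once, e.g. deferred.defer(backfill_unit_pattern_set), from a handler or the remote API shell.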

Autocomplete Dropdown - too much data, timing out

So, I have an autocomplete dropdown with a list of townships. Initially I just had the 20 or so that we had in the database, but recently we noticed that some of our data lies in other counties, even other states. So the answer to that was to buy one of those databases with all the towns in the US (yes, I know geocoding is the answer, but due to time constraints we are doing this until we have time for that feature).
So, when we had 20-25 towns the autocomplete worked brilliantly; now that there are 80,000, it's not as easy.
As I type this, I am thinking that the best way to do it is to default to one state, so there will be much less data. I will add a state selector to the page that defaults to NJ, and you can pick another state if need be, which narrows the list to fewer than 1,000. Though I may still have the same issue? Does anyone know of a workaround for an autocomplete with a lot of data?
Should I post the code of my web service?
Are you trying to autocomplete after only 1 character is typed? Maybe wait until 2 or more...?
Also, can you just return the top 10 rows, or something?
Sounds like your application is choking on the amount of data being returned and then rendered by the browser.
I assume that your database has the proper indexes, and you don't have a performance problem there.
I would limit the results of your service to no more than, say, 100 results. Users will not look at any more than that anyhow.
I would also only retrieve the data from the service once 2 or 3 characters have been entered, which will further reduce the scope of the query (see the sketch below).
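Putting those two suggestions together, the web service could look something like this sketch (the LINQ to SQL context db and the Townships table/columns are assumptions, not from your code):

public IList<string> GetTownships(string state, string prefix)
{
    // don't hit the database until at least 2 characters have been typed
    if (string.IsNullOrEmpty(prefix) || prefix.Length < 2)
        return new List<string>();

    return (from t in db.Townships
            where t.State == state && t.Name.StartsWith(prefix)
            orderby t.Name
            select t.Name)
           .Take(100)   // cap the result set; users won't read more than this
           .ToList();
}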
Good Luck!
Stupid question maybe, but... have you checked to make sure you have an index on the town name column? I wouldn't think 80K names should be stressing your database...
I think you're on the right track. Use a series of cascading inputs, State -> County -> Township where each succeeding one grabs the potential population based on the value of the preceding one. Each input would validate against its potential population to avoid spurious inputs. I would suggest caching the intermediate results and querying against them for the autocomplete instead of going all the way back to the database each time.
If you have control of the underlying SQL, you may want to try several "UNION" queries instead of one query with several "OR like" lines in its where clause.
Check out this article on optimizing SQL.
I'd just limit the SQL query with a TOP clause. I also like using a "less than" instead of a LIKE:
select top 10 name from cities where @partialname < name order by name;
That way "Ce" will give you "Cedar Grove" and "Cedar Knolls", but also "Chatham" and "Cherry Hill", so you always get ten.
In LINQ:
var q = (from c in db.Cities
         where string.Compare(partialname, c.Name) < 0
         orderby c.Name
         select c.Name).Take(10);
