Best database design (model) for user tables - database

I'm developing a web application using Google App Engine and Django, but I think my problem is more general.
Users can create their own tables, but note: these tables are not represented as TABLES in the database. Let me give an example:
First form:
Name of the table: __________
First column name: __________
Second column name: _________
...
The number of columns is not fixed, but there is a maximum (100, for example). The type of every column is the same.
Second form (after choosing a particular table, the user can fill it in):
column_name1: _____________
column_name2: _____________
....
I'm using this solution, but it's wrong:
from google.appengine.ext import db

class Table(db.Model):
    name = db.StringProperty(required=True)

class Column(db.Model):
    name = db.StringProperty(required=True)
    number = db.IntegerProperty()
    table = db.ReferenceProperty(Table, collection_name="columns")

class Value(db.Model):
    time = db.TimeProperty()
    column = db.ReferenceProperty(Column, collection_name="values")
When I want to list a table, I take its columns, and from every column I take its values:
data = []
for column in table.columns:
    column_data = []
    for value in column.values:
        column_data.append(value.time)
    data.append(column_data)
data = zip(*data)
I think the problem is the order of the values: it is not guaranteed that the order of the values in one column matches the order in the others. I'm expecting this bug (though I have never actually seen it yet):

Table as I want it:    Table as I might get it:
a z c                  a e c
d e f                  d h f
g h i                  g z i
Are there better solutions? Maybe using a ListProperty?

Here's a data model that might do the trick for you:
class Table(db.Model):
    name = db.StringProperty(required=True)
    owner = db.UserProperty()
    column_names = db.StringListProperty()

class Row(db.Model):
    values = db.ListProperty(yourtype)
    table = db.ReferenceProperty(Table, collection_name='rows')
My reasoning:
You don't really need a separate entity to store column names. Since all columns are of the same data type, you only need to store the name, and the fact that they are stored in a list gives you an implicit order number.
By storing the values in a list in the Row entity, you can use an index into the column_names property to find the matching value in the values property.
By storing all of the values for a row together in a single entity, there is no possibility of values appearing out of their correct order.
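To make the index correspondence concrete, here is a minimal sketch (the function name and the padding behaviour are my assumptions, not part of the model) of how a table stored this way could be read back for display:

def read_table(table):
    # Column order comes from column_names; each Row's values list
    # lines up with it by index.
    header = table.column_names
    grid = []
    for row in table.rows:  # back-reference from Row.table
        # Pad defensively in case a row is shorter than the header.
        grid.append(list(row.values) + [None] * (len(header) - len(row.values)))
    return header, grid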
Caveat emptor:
This model will not work well if columns can be added to a table after it has been populated with data. To make that possible, every time a column is added, every existing row belonging to that table would have to have a value appended to its values list (see the sketch below). If it were possible to efficiently store dictionaries in the datastore, this would not be a problem, but lists can really only be appended to.
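As a rough illustration of that cost, here is a hedged sketch (the function name and default handling are assumptions) of what adding a column after the fact would entail:

def add_column(table, column_name, default=None):
    table.column_names.append(column_name)
    table.put()
    # Every existing row must be rewritten: one datastore put() per row.
    for row in table.rows:
        row.values.append(default)
        row.put()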
Alternatively, you could use Expando...
Another possibility is to define the Row model as an Expando, which allows you to dynamically create properties on an entity. You would set column values only for the columns that actually have values in them, and you could also add columns to the table after it has data in it without breaking anything:
class Row(db.Expando):
    table = db.ReferenceProperty(Table, collection_name='rows')

    @staticmethod
    def __name_for_column_index(index):
        return "column_%d" % index

    def __getitem__(self, key):
        # Allows one to get at the columns of Row entities with
        # subscript syntax:
        #   first_row = Row.get()
        #   col1 = first_row[1]
        #   col12 = first_row[12]
        value = None
        try:
            value = getattr(self, Row.__name_for_column_index(key))
        except AttributeError:
            # The given column is not defined for this Row
            pass
        return value

    def __setitem__(self, key, value):
        # Allows one to set the columns of Row entities with
        # subscript syntax:
        #   first_row = Row.get()
        #   first_row[5] = "New value for column 5"
        setattr(self, Row.__name_for_column_index(key), value)
        # In order to allow efficient multiple column changes,
        # the put() can go somewhere else.
        self.put()

Why not add an IntegerProperty to Value for rowNumber, increment it every time you add a new row of values, and then reconstruct the table by sorting on rowNumber?
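For illustration, a hedged sketch of that suggestion against the asker's original model (the row_number property and the ordering call are the only additions):

class Value(db.Model):
    time = db.TimeProperty()
    row_number = db.IntegerProperty()  # incremented for each new row
    column = db.ReferenceProperty(Column, collection_name="values")

# When listing, order every column's values the same way:
ordered_values = column.values.order("row_number")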

You're going to make life very hard for yourself unless your users' 'tables' are actually stored as real tables in a relational database. Find some way of actually creating tables and use the power of an RDBMS, or you'll be reinventing a very complex and sophisticated wheel.

This is the conceptual idea I would use. I would create two classes for the data store:

table: this would serve as a dictionary, storing the structure of the pseudo-tables your app creates. It would have three fields: table_name, column_name and column_order, where column_order gives the position of the column within the table.

data: this would store the actual data in the pseudo-tables. It would have four fields: row_id, table_name, column_name and column_data. row_id would be the same for data pertaining to the same row and would be unique for data across the various pseudo-tables.
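A minimal sketch of how these two classes might look in App Engine terms (the class names TableStructure and Data, and the property types, are my assumptions):

class TableStructure(db.Model):  # "table" in the description above
    table_name = db.StringProperty(required=True)
    column_name = db.StringProperty(required=True)
    column_order = db.IntegerProperty()

class Data(db.Model):  # "data" in the description above
    row_id = db.StringProperty()
    table_name = db.StringProperty()
    column_name = db.StringProperty()
    column_data = db.StringProperty()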

Put the data in a LongBlob.
The power of a database is being able to search and organise data so that you can get only the part you want, for performance and simplicity reasons: you don't want the whole database, you want just a part of it, and you want it fast. But from what I understand, when you retrieve a user's data you retrieve all of it and display it. So you don't need to store the data in a normal "database" way.
What I would suggest is to simply format and store the whole data from a single user in a single column with a suitable type (LongBlob, for example). The format would be an object with a list of columns and rows, and you define the object in whatever language you use to communicate with the database.
The columns in your (real) database would be: User int, TableNo int, Table LongBlob.
If user 8 has 3 tables, you will have the following rows:
8, 1, objectcontainingtable1;
8, 2, objectcontainingtable2;
8, 3, objectcontainingtable3;
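A hedged sketch of the idea, using JSON as one possible serialization format (the shape of the object is an assumption):

import json

# One user table as a single serializable object.
table_object = {
    "columns": ["name", "time"],
    "rows": [["a", "10:00"], ["b", "10:05"]],
}

blob = json.dumps(table_object)  # store this in the Table LongBlob column
restored = json.loads(blob)      # read it back in one piece for display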

Related

Indexing in Room prepackaged database not working

I have a table in a prepackaged database whose index I created using this query
CREATE INDEX index_less_equal_to_L ON entries_less_equal_to_L(entry_word);
And I specified it in my Room entity using this:
@Entity(tableName = "entries_less_equal_to_L", indices = {@Index("index_less_ equal_to_L"), @Index(value = "entry_word")})
But it doesn't work; it shows me this:
[build error output]: https://i.stack.imgur.com/lQmtp.png
What am I doing wrong?
You are trying to create 2 indices, 1 for each @Index.
The first will be named index_less_ equal_to_L but will have no columns (hence the error message).
The second will be created (successfully) with a generated name and will be on the entry_word column. The name of the index will be as per:
If not set, Room will set it to the list of columns joined by '_' and prefixed by "index_${tableName}". So if you have a table with name "Foo" and with an index of {"bar", "baz"}, the generated index name will be "index_Foo_bar_baz".
https://developer.android.com/reference/androidx/room
So you need to combine the two into a single index.
You could use either of the following (note: the space between _ and equal_to_L has been removed, as I assume it is a typing error)
indices = {@Index(name = "index_less_equal_to_L", value = "entry_word")}
or
indices = {@Index(name = "index_less_equal_to_L", value = {"entry_word"})}
The latter is the more flexible, as it allows for composite indices (multiple columns).

How to make dynamic references to tables in AnyLogic?

I've modeled six machines. Each of them has a different electricity load profile. The load profile is provided in a table in AnyLogic; every machine has its own table storing these values. I iterate through the values to implement the same in TableFunctions. Now I face the following challenge: how can I make a dynamic reference to the relevant table? I would like to pick a specific table depending on a machine index. How can I define a variable that dynamically refers to the relevant table object?
Thank you for your help!
Not sure it is really necessary in your case, but here goes:
You can store a reference to a database table to a variable of the following type:
com.mysema.query.sql.RelationalPathBase
When selecting values of double (int, String, etc.) type in a particular column, you can get the column by index by calling variable.getColumns().get(index). Then you need to cast it to the corresponding type, like below:
List<Double> resultRows = selectFrom(variable)
    .where(((com.mysema.query.types.path.NumberPath<Double>) variable.getColumns().get(1)).eq(2.0))
    .list(((com.mysema.query.types.path.NumberPath<Double>) variable.getColumns().get(1)));
Are you always going to have a finite number of machines, and how is your load profile represented? If you have a finite number of machines, and the load profile is a set of individual values (or, indeed, as long as you can hold those values in a single field per element), then you can create a single table, e.g. machine_load_profile, where the first column, load_profile_element, holds element IDs, and further columns are named machine_0, machine_1, machine_2, etc., holding the values for each load profile element. You can then get the load profile elements for a single machine like this:
List<Double> dblReturnLPEs = main.selectValues(
    "SELECT machine_" + oMachine.getIndex()
    + " FROM machine_load_profile"
    + " ORDER BY load_profile_element;"
);
and either iterate that list or convert them into an array:
dblLPEValues = dblReturnLPEs.stream().mapToDouble(Double::doubleValue).toArray();
and iterate that.
Of course, you could also use the opposite orientation for your columns and rows, using WHERE; I simply had a handy example oriented in this manner.

odoo domain search "id in ids"

I have a model B with a Many2many field referencing model A.
Now, given an id of model A, I try to get the records of B that reference it.
Is this possible with Odoo search domains? Is it possible doing some SQL query?
Example
class A(models.Model):
    _name = 'module.a'

class B(models.Model):
    _name = 'module.b'

    a_ids = fields.Many2many('module.a')
I try to do something like
a_id = 5
filtered_b_ids = self.env['module.b'].search([(a_id,'in','a_ids')])
However, this is not a valid search in Odoo. Is there a way to let the database do the search?
So far I fetch all records of B from the database and filter them afterward:
all_b_ids = self.env['module.b'].search([])
filtered_b_ids = [b_id for b_id in all_b_ids if a_id in b_id.a_ids.ids]
However, I want to avoid fetching unneeded records and would like to let the database do the filtering.
You should create the equivalent Many2many field in A.
class A(models.Model):
    _name = 'module.a'

    b_ids = fields.Many2many('module.b', 'rel_a_b', 'a_id', 'b_id')

class B(models.Model):
    _name = 'module.b'

    a_ids = fields.Many2many('module.a', 'rel_a_b', 'b_id', 'a_id')
In the field definition, the second argument is the name of the association table, and the next two are the names of the columns referencing the records of the two models. It's explained in the official ORM documentation.
Then you just have to do my_a_record.b_ids.
If you prefer doing an SQL request because you don't want to add a Python field to A, you can do so by calling self.env.cr.execute("select id from module_b b, ...") and then self.env.cr.fetchall(). In your request you have to join the association table (so you need to specify a name for it and for its columns, as described in my code extract; otherwise they are automatically named by Odoo and I don't know the rule).
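For example, a hedged sketch of that raw request, assuming the explicit association table and column names from the field definitions above ('rel_a_b', 'a_id', 'b_id'):

a_id = 5
self.env.cr.execute("SELECT b_id FROM rel_a_b WHERE a_id = %s", (a_id,))
b_ids = [row[0] for row in self.env.cr.fetchall()]
filtered_b = self.env['module.b'].browse(b_ids)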
I think it's still possible to use search domains without the field in A but it's tricky. You can try search([('a_ids','in', [a_id])]) but I'm really not sure.
class A(models.Model):
    _name = 'module.a'

class B(models.Model):
    _name = 'module.b'

    a_ids = fields.Many2many('module.a')
Now suppose you want to search for a_id = 5.
To do so, simply use the search or browse ORM methods, i.e.:
a_id = 5
filtered_b_ids = self.env['module.b'].search([('a_ids', 'in', [a_id])])
or
a_id = 5
a_record = self.env['module.a'].browse(a_id)

Django Query Optimisation

I am currently working on a telecom analytics project and am a newbie at query optimisation. Showing the result in the browser takes a full minute, while only 45,000 records need to be accessed. Could you please suggest ways to reduce the time it takes to show results?
I wrote the following query to find the call duration of a person in an age group:
sigma = 0
popn = len(Demo.objects.filter(age_group=age))
card_list = [Demo.objects.filter(age_group=age)[i].card_no
             for i in range(popn)]
for card in card_list:
    dic = Fact_table.objects.filter(card_no=card).aggregate(Sum('duration'))
    sigma += dic['duration__sum']
avgDur = sigma / popn
The above code is inside a for loop that iterates over the age groups.
Model is as follows:
class Demo(models.Model):
    card_no = models.CharField(max_length=20, primary_key=True)
    gender = models.IntegerField()
    age = models.IntegerField()
    age_group = models.IntegerField()

class Fact_table(models.Model):
    pri_key = models.BigIntegerField(primary_key=True)
    card_no = models.CharField(max_length=20)
    duration = models.IntegerField()
    time_8bit = models.CharField(max_length=8)
    time_of_day = models.IntegerField()
    isBusinessHr = models.IntegerField()
    Day_of_week = models.IntegerField()
    Day = models.IntegerField()
Thanks
Try this:
sigma = 0
demo_by_age = Demo.objects.filter(age_group=age)
popn = demo_by_age.count()  # One
card_list = demo_by_age.values_list('card_no', flat=True)  # Two
dic = Fact_table.objects.filter(card_no__in=card_list).aggregate(Sum('duration'))  # Three
sigma = dic['duration__sum']
avgDur = sigma / popn
A statement like card_list=[Demo.objects.filter(age_group=age)[i].card_no for i in range(popn)] will generate popn separate queries and database hits. The query in the for loop will also hit the database popn times. As a general rule, you should try to minimize the number of queries you use, and you should only select the records you need.
With a few adjustments to your code this can be done in just one query.
There's generally no need to manually specify a primary_key, and in all but some very specific cases it's even better not to define any. Django automatically adds an indexed, auto-incremental primary key field. If you need the card_no field as a unique field, and you need to find rows based on this field, use this:
class Demo(models.Model):
    card_no = models.SlugField(max_length=20, unique=True)
    ...
SlugField automatically adds a database index to the column, essentially making selections by this field as fast as when it is a primary key. This still allows other ways to access the table, e.g. foreign keys (as I'll explain in my next point), to use the (slightly) faster integer field specified by Django, and will ease the use of the model in Django.
If you need to relate an object to an object in another table, use models.ForeignKey. Django gives you a whole set of new functionality that not only makes it easier to use the models, but also makes a lot of queries faster by using JOIN clauses in the SQL query. So for your example:
class Fact_table(models.Model):
    card = models.ForeignKey(Demo, related_name='facts')
    ...
The related_name argument allows you to access all Fact_table objects related to a Demo instance by using instance.facts in Django. (See https://docs.djangoproject.com/en/dev/ref/models/fields/#module-django.db.models.fields.related)
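For instance, a small sketch (hypothetical usage, not from the question) of what this back-reference buys you:

# Total call duration for one person, via the related_name
# (assumes: from django.db.models import Sum):
demo = Demo.objects.get(card_no='1234567890')
total = demo.facts.aggregate(Sum('duration'))['duration__sum']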
With these two changes, your query (including the loop over the different age_groups) can be changed into a blazing-fast one-hit query giving you the average duration of calls made by each age_group:
age_groups = Demo.objects.values('age_group').annotate(duration_avg=Avg('facts__duration'))
for group in age_groups:
    print "Age group: %s - Average duration: %s" % (group['age_group'], group['duration_avg'])
.values('age_group') selects just the age_group field from the Demo database table. .annotate(duration_avg=Avg('facts__duration')) takes every unique result from values (thus each unique age_group), and for each unique result fetches all Fact_table objects related to any Demo object within that age_group and calculates the average of all their duration fields, all in a single query.

Searching for and matching elements across arrays

I have two tables.
In one table there are two columns: one has the ID and the other the abstract of a document, about 300-500 words long. There are about 500 rows.
The other table has only one column and >18000 rows. Each cell of that column contains a distinct acronym such as NGF, EPO, TPO etc.
I am interested in a script that will scan each abstract of table 1 and identify one or more of the acronyms present in it that are also present in table 2.
Finally, the program will create a separate table where the first column contains the content of the first column of table 1 (i.e. the ID) and the second contains the acronyms found in the document associated with that ID.
Can someone with expertise in Python, Perl or any other scripting language help?
It seems to me that you are trying to join the two tables where the acronym appears in the abstract, i.e. (pseudo-SQL):
SELECT acronym.id, document.id
FROM acronym, document
WHERE acronym.value IN explode(documents.abstract)
Given the desired semantics, you can use the most straightforward approach:
acronyms = ['ABC', ...]
documents = [(0, "Document zeros discusses the value of ABC in the context of..."), ...]

joins = []
for id, abstract in documents:
    for word in abstract.split():
        try:
            index = acronyms.index(word)
            joins.append((id, index))
        except ValueError:
            pass  # word not an acronym
This is a straightforward implementation; however, it has n cubed running time as acronyms.index performs a linear search (of our largest array, no less). We can improve the algorithm by first building a hash index of the acronyms:
acronyms = ['ABC', ...]
documents = [(0, "Document zeros discusses the value of ABC in the context of..."), ...]

index = dict((acronym, idx) for idx, acronym in enumerate(acronyms))

joins = []
for id, abstract in documents:
    for word in abstract.split():
        try:
            joins.append((id, index[word]))
        except KeyError:
            pass  # word not an acronym
Of course, you might want to consider using an actual database. That way you won't have to implement your joins by hand.
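As a hedged sketch of what that could look like with SQLite (the table and column names here are assumptions):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE acronym (id INTEGER PRIMARY KEY, value TEXT);
    CREATE TABLE document (id INTEGER PRIMARY KEY, abstract TEXT);
""")
# ...load the 18000 acronyms and 500 abstracts, then let SQL do the join:
matches = conn.execute("""
    SELECT d.id, a.value
    FROM document d
    JOIN acronym a
      ON ' ' || d.abstract || ' ' LIKE '% ' || a.value || ' %'
""").fetchall()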
Thanks a lot for the quick response.
I assume the pseudo-SQL solution is for MySQL etc.; however, it did not work in Microsoft Access.
The second and the third are for Python, I assume. Can I feed acronym and document as input files?
babru
It didn't work in Access because tables are accessed differently (e.g. acronym.[id])
