Odoo Create Read Only Model without Saving It in Database

I want to create a computed One2many field in the contact module (res.partner). This is my field declaration:
payments = fields.One2many('payment', 'partner_id')
I add a compute attribute so the payments field is calculated automatically:
payments = fields.One2many('payment', 'partner_id', compute='_compute_payments')
This is my compute method:
def _compute_payments(self):
    for rec in self:
        # ... do some query on the database
        rec.payments = self.env['payment'].create({
            'partner_id': rec.id,
            # ... other fields with values from the query
        })
Everything works, but when I check the database the records are being created over and over again. I don't want to save these records to the database, just like any other computed field, which is not stored by default.
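One way to get non-persisted records in a compute method (a minimal sketch, not taken from the question; it assumes the model really is named payment and that the query results come back as a list of field/value dicts) is to build them with new() instead of create(), since new() only creates records in memory:
def _compute_payments(self):
    for rec in self:
        rows = []  # ... fill from your query: a list of {'field': value} dicts
        records = rec.env['payment']  # start from an empty recordset
        for row in rows:
            # new() creates the record in cache only; nothing is written to the database
            records |= rec.env['payment'].new(dict(row, partner_id=rec.id))
        rec.payments = records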

Related

odoo domain search "id in ids"

I have a model B with a Many2many field referencing model A.
Now given an id of model A, I try to get the records of B that reference it.
Is this possible with Odoo search domains? Is it possible doing some SQL query?
Example
class A(models.Model):
    _name = 'module.a'

class B(models.Model):
    _name = 'module.b'
    a_ids = fields.Many2many('module.a')
I try to do something like
a_id = 5
filtered_b_ids = self.env['module.b'].search([(a_id,'in','a_ids')])
However, this is not a valid search in Odoo. Is there a way to let the database do the search?
So far I fetch all records of B from the database and filter them afterward:
all_b_ids = self.env['module.b'].search([])
filtered_b_ids = [b for b in all_b_ids if a_id in b.a_ids.ids]
However, I want to avoid fetching not needed records and would like to let the database do the filtering.
You should create the equivalent Many2many field in A.
class A(models.Model):
    _name = 'module.a'
    b_ids = fields.Many2many('module.b', 'rel_a_b', 'a_id', 'b_id')

class B(models.Model):
    _name = 'module.b'
    a_ids = fields.Many2many('module.a', 'rel_a_b', 'b_id', 'a_id')
In the field definition, the second argument is the name of the association table, and the next two are the names of the columns referencing the records of the two models. It's explained in the official ORM documentation.
Then you just have to do my_a_record.b_ids.
If you prefer doing an SQL request because you don't want to add a Python field to A, you can do so by calling self.env.cr.execute("select id from module_b b, ...") and then self.env.cr.fetchall(). In your request you have to join the association table (so you need to specify a name for it and its columns, as described in my code extract, otherwise they are automatically named by Odoo and I don't know the rule).
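As a rough illustration (a sketch only, reusing the rel_a_b relation table and column names from the field definitions above), the raw query could look like this:
a_id = 5
self.env.cr.execute("""
    SELECT b.id
    FROM module_b b
    JOIN rel_a_b rel ON rel.b_id = b.id
    WHERE rel.a_id = %s
""", (a_id,))
b_ids = [row[0] for row in self.env.cr.fetchall()]
filtered_b = self.env['module.b'].browse(b_ids)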
I think it's still possible to use search domains without the field in A but it's tricky. You can try search([('a_ids','in', [a_id])]) but I'm really not sure.
class A(models.Model):
    _name = 'module.a'

class B(models.Model):
    _name = 'module.b'
    a_ids = fields.Many2many('module.a')
Now you want to search for a_id = 5.
To do so, simply use the search ORM method, i.e.:
a_id = 5
filtered_b_ids = self.env['module.b'].search([('a_ids', 'in', [a_id])])
or
a_id = 5
filtered_b_ids = self.env['module.b'].search([('a_ids', '=', a_id)])

Pentaho salesforce upsert using externalID

I am trying to insert data into Salesforce using upsert, and for one field I am using the ExternalId field. I have tried many combinations but it fails... I get the error: the syntax should be object:externalId/lookupField
Any idea what the exact syntax is? Keep in mind I am inserting into the Account table and the externalId field refers to Account as well.
object:externalId/lookupField is not very clear, is it? There's a comment hidden away in the Pentaho code:
// We use an external key
// the structure should be like this :
// object:externalId/lookupField
// where
// object is the type of the object
// externalId is the name of the field in the object to resolve the value
// lookupField is the name of the field in the current object to update (is the "__r" version)
Let's say you're populating a Salesforce object Foo__c, which has a Lookup field to Contact called Contact__c. The 'relationship name' for that lookup field would then be Contact__r.
On Contact, let's say you have added an External ID field called Legacy_Id__c, and that's what you want to use when populating Foo__c.
What Pentaho would want in the Module Field column would then be:
Contact:Legacy_Id__c/Contact__r
The bit to the left of the slash tells Pentaho which object and external ID to map to. The bit to the right of the slash tells Pentaho which lookup/relationship on Foo__c to fill in.

can I use data transformers to combine fields in forms in symfony2

Is it possible to use Data transformers to merge (n) fields in a form into one persistable field?
If it's possible, how do I do it? The cookbook only gives an example of transforming one piece of data into another type, but I need to be able to dump N fields into only one for persistence. So if I'm showing 6 fields in the form, only 3 are real fields in the DB table: the first and second fields are persisted as is, but the remaining 4 fields are to be stored in the third table column.
You should do it via the FormEvents::POST_SUBMIT event.
http://symfony.com/doc/current/cookbook/form/dynamic_form_modification.html
Basically, something like this:
$builder->addEventListener(FormEvents::POST_SUBMIT, function (FormEvent $event) {
    $form = $event->getForm();
    // entity or array
    $data = $event->getData();
    // get data directly from the form
    $concatData = $form->get('non_mapped_field1_1')->getData() . ',' . $form->get('non_mapped_field1_2')->getData();
    // assuming that $data is an entity class
    $data->setSomeField($concatData);
});

Synchronizing data between two Django servers

I have a central Django server containing all of my information in a database. I want to have a second Django server that contains a subset of that information in a second database. I need a bulletproof way to selectively sync data between the two.
The secondary Django will need to pull its subset of data from the primary at certain times. The subset will have to be filtered by certain fields.
The secondary Django will have to occasionally push its data to the primary.
Ideally, the two-way sync would keep the most recently modified objects for each model.
I was thinking something along the lines of using TimeStampedModel (from django-extensions) or adding my own DateTimeField(auto_now=True) so that every object stores its last modified time. Then, maybe a mechanism to dump the data from one DB and load it into the other such that only the more recently modified objects are kept.
Possibilities I am considering are django's dumpdata, django-extensions dumpscript, django-test-utils makefixture or maybe django-fixture magic. There's a lot to think about, so I'm not sure which road to proceed down.
Here is my solution, which fits all of my requirements:
Implement natural keys and unique constraints on all models
Allows for a unique way to refer to each object without using primary key IDs (a sketch of this is shown at the end of this answer)
Subclass each model from TimeStampedModel in django-extensions
Adds automatically updated created and modified fields
Create a Django management command for exporting, which filters a subset of data and serializes it with natural keys
import itertools
from django.core import serializers

baz = Baz.objects.filter(foo=bar)
yaz = Yaz.objects.filter(foo=bar)
objects = [baz, yaz]
flat_objects = list(itertools.chain.from_iterable(objects))
data = serializers.serialize("json", flat_objects, indent=3, use_natural_keys=True)
print(data)
Create a Django management command for importing, which reads in the serialized file and iterates through the objects as follows:
If the object does not exist in the database (by natural key), create it
If the object exists, check the modified timestamps
If the imported object is newer, update the fields
If the imported object is older, do not update (but print a warning)
Code sample:
from django.core import serializers
from django.core.exceptions import ObjectDoesNotExist

# Open the file
with open(args[0]) as data_file:
    json_str = data_file.read()
# Deserialize and iterate
for obj in serializers.deserialize("json", json_str):
    # Get model info
    model_class = obj.object.__class__
    natural_key = obj.object.natural_key()
    manager = model_class._default_manager
    # Delete PK value
    obj.object.pk = None
    try:
        # Get the existing object
        existing_obj = model_class.objects.get_by_natural_key(*natural_key)
        # Check the timestamps
        date_existing = existing_obj.modified
        date_imported = obj.object.modified
        if date_imported > date_existing:
            # Update fields
            for field in obj.object._meta.fields:
                if field.editable and not field.primary_key:
                    imported_val = getattr(obj.object, field.name)
                    existing_val = getattr(existing_obj, field.name)
                    if existing_val != imported_val:
                        setattr(existing_obj, field.name, imported_val)
            # Persist the updated fields
            existing_obj.save()
    except ObjectDoesNotExist:
        obj.save()
The workflow for this is to first call python manage.py exportTool > data.json, then on another django instance (or the same), call python manage.py importTool data.json.
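For reference, the natural-key plumbing assumed in the first step might look roughly like this (a sketch only; the Baz model and its unique name field are made up for illustration):
from django.db import models
from django_extensions.db.models import TimeStampedModel

class BazManager(models.Manager):
    def get_by_natural_key(self, name):
        return self.get(name=name)

class Baz(TimeStampedModel):
    name = models.CharField(max_length=100, unique=True)
    objects = BazManager()

    def natural_key(self):
        return (self.name,)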

Best database design (model) for user tables

I'm developing a web application using Google App Engine and Django, but I think my problem is more general.
The users have the ability to create tables; note that these tables are not represented as TABLES in the database. I'll give you an example:
First form:
Name of the table: __________
First column name: __________
Second column name: _________
...
The number of columns is not fixed, but there is a maximum (100, for example). The type of every column is the same.
Second form (after choosing a particular table the user can fill the table):
column_name1: _____________
column_name2: _____________
....
I'm using this solution, but it's wrong:
class Table(db.Model):
    name = db.StringProperty(required=True)

class Column(db.Model):
    name = db.StringProperty(required=True)
    number = db.IntegerProperty()
    table = db.ReferenceProperty(Table, collection_name="columns")

class Value(db.Model):
    time = db.TimeProperty()
    column = db.ReferenceProperty(Column, collection_name="values")
When I want to list a table, I take its columns, and from every column I take its values:
data = []
for column in table.columns:
    column_data = []
    for value in column.values:
        column_data.append(value.time)
    data.append(column_data)
data = zip(*data)
I think that the problem is the order of the values, because it is not guaranteed that the order for one column is the same as for the others. I'm expecting this bug (though so far I have never seen it):
Table as I want it:    As I might get it:
a z c                  a e c
d e f                  d h f
g h i                  g z i
Better solutions? Maybe using ListProperty?
Here's a data model that might do the trick for you:
class Table(db.Model):
    name = db.StringProperty(required=True)
    owner = db.UserProperty()
    column_names = db.StringListProperty()

class Row(db.Model):
    values = db.ListProperty(yourtype)
    table = db.ReferenceProperty(Table, collection_name='rows')
My reasoning:
You don't really need a separate entity to store column names. Since all columns are of the same data type, you only need to store the name, and the fact that they are stored in a list gives you an implicit order number.
By storing the values in a list in the Row entity, you can use an index into the column_names property to find the matching value in the values property.
By storing all of the values for a row together in a single entity, there is no possibility of values appearing out of their correct order.
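As an illustration of that index-based lookup (a sketch only, assuming the Table/Row model above), reading one cell of a row could look like this:
def get_cell(row, column_name):
    # The position of the name in column_names is also the position
    # of the value in the row's values list.
    index = row.table.column_names.index(column_name)
    return row.values[index]

# Usage, assuming an existing table entity:
first_row = table.rows.get()
value = get_cell(first_row, 'column_name1')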
Caveat emptor:
This model will not work well if the table can have columns added to it after it has been populated with data. To make that possible, every time a column is added, every existing row belonging to that table would have to have a value appended to its values list. If it were possible to efficiently store dictionaries in the datastore, this would not be a problem, but lists can really only be appended to.
Alternatively, you could use Expando...
Another possibility is to define the Row model as an Expando, which allows you to dynamically create properties on an entity. You could set column values only for the columns that have values in them, and you could also add columns to the table after it has data in it without breaking anything:
class Row(db.Expando):
    table = db.ReferenceProperty(Table, collection_name='rows')

    @staticmethod
    def __name_for_column_index(index):
        return "column_%d" % index

    def __getitem__(self, key):
        # Allows one to get at the columns of Row entities with
        # subscript syntax:
        #   first_row = Row.get()
        #   col1 = first_row[1]
        #   col12 = first_row[12]
        value = None
        try:
            value = getattr(self, Row.__name_for_column_index(key))
        except AttributeError:
            # The given column is not defined for this Row
            pass
        return value

    def __setitem__(self, key, value):
        # Allows one to set the columns of Row entities with
        # subscript syntax:
        #   first_row = Row.get()
        #   first_row[5] = "New value for column 5"
        setattr(self, Row.__name_for_column_index(key), value)
        # In order to allow efficient multiple column changes,
        # the put() can go somewhere else.
        self.put()
Why don't you add an IntegerProperty to Value for the row number, increment it every time you add a new row of values, and then reconstruct the table by sorting on that row number?
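A sketch of that idea against the original Value model (the row_number property name is made up for illustration):
class Value(db.Model):
    time = db.TimeProperty()
    row_number = db.IntegerProperty()
    column = db.ReferenceProperty(Column, collection_name="values")

# When rebuilding a column, sort its values by row number:
ordered = Value.all().filter('column =', column).order('row_number')
column_data = [value.time for value in ordered]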
You're going to make life very hard for yourself unless your users' 'tables' are actually stored as real tables in a relational database. Find some way of actually creating tables and use the power of an RDBMS, or you're reinventing a very complex and sophisticated wheel.
This is the conceptual idea I would use:
I would create two classes for the datastore:
table: this would serve as a dictionary, storing the structure of the pseudo-tables your app would create. It would have three fields: table_name, column_name, column_order, where column_order gives the position of the column within the table.
data: this would store the actual data in the pseudo-tables. It would have four fields: row_id, table_name, column_name, column_data. row_id would be the same for data pertaining to the same row and would be unique for data across the various pseudo-tables.
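A minimal sketch of those two kinds with App Engine's db API (the class and property names simply follow the description above):
class TableColumn(db.Model):
    # One entity per column of a pseudo-table (the "dictionary").
    table_name = db.StringProperty(required=True)
    column_name = db.StringProperty(required=True)
    column_order = db.IntegerProperty()

class TableData(db.Model):
    # One entity per cell of a pseudo-table.
    row_id = db.StringProperty(required=True)
    table_name = db.StringProperty(required=True)
    column_name = db.StringProperty(required=True)
    column_data = db.StringProperty()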
Put the data in a LongBlob.
The power of a database is being able to search and organise data so that you get only the part you want, for performance and simplicity: you don't want the whole database, you just want a part of it and you want it fast. But from what I understand, when you retrieve a user's data you retrieve it all and display it. So you don't need to store the data in the normal "database" way.
What I would suggest is to simply format and store the whole data from a single user in a single column with a suitable type (LongBlob, for example). The format would be an object with a list of columns and a list of rows. And you define the object in whatever language you use to communicate with the database.
The columns in your (real) database would be: User int, TableNo int, Table LongBlob.
If user 8 has 3 tables, you will have the following rows:
8, 1, objectcontainingtable1;
8, 2, objectcontainingtable2;
8, 3, objectcontainingtable3;
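A sketch of the serialization side in Python (the JSON layout here is just one possible format; the answer leaves the exact object definition up to you):
import json

def pack_table(columns, rows):
    # Serialize a whole pseudo-table into a single blob value.
    return json.dumps({'columns': columns, 'rows': rows})

def unpack_table(blob):
    data = json.loads(blob)
    return data['columns'], data['rows']

# Example: the object stored for (User=8, TableNo=1)
blob = pack_table(['name', 'time'], [['a', '10:00'], ['b', '11:30']])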
