Create a model in app1 when app2 loads - django-models

Imagine I have an app1 called 'pricelists' and an app2 called 'marketplaces'.
In the marketplaces app, I want to auto-create a pricelists.PriceList if it is not yet present. This PriceList is then used in signals to auto-populate price lists depending on a few factors.
Currently, I use something like this in my signals:
price_list, _ = PriceList.objects.get_or_create(
    currency='EUR', is_default=False, customer_type='CONS',
    remarks='Marketplace')
I don't like this approach: it is repeated in a number of places, and I simply want the price list to be guaranteed to exist.
My question: how can I get_or_create a model object in another app every time Django restarts?
Solution
In your app's __init__.py, manually define your AppConfig, since it doesn't seem to get detected automatically in Django 1.10:
default_app_config = 'marketplaces.apps.MarketPlacesConfig'
Then override your AppConfig's ready() method:
from django.apps import AppConfig
from django.conf import settings

class MarketPlacesConfig(AppConfig):
    name = 'marketplaces'

    def ready(self):
        from pricelists.models import PriceList, PriceListItem
        price_list_marketplaces, _ = PriceList.objects.get_or_create(
            **settings.MARKETPLACES['price_list'])
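For completeness, here is a plausible shape for the settings entry this config reads; the key names are assumptions based on the get_or_create call shown earlier, not a documented structure:
# settings.py -- hypothetical structure for the MARKETPLACES setting
MARKETPLACES = {
    'price_list': {
        'currency': 'EUR',
        'is_default': False,
        'customer_type': 'CONS',
        'remarks': 'Marketplace',
    },
}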

Using AppConfig.ready() together with django.db.models.signals is the only way I can think of.
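For reference, a minimal sketch of that signal-based variant: connecting to post_migrate inside ready() defers the query until migrations have run, so no database work happens before the tables exist (the settings key is the same assumption as above):
from django.apps import AppConfig
from django.db.models.signals import post_migrate

def create_default_price_list(sender, **kwargs):
    from django.conf import settings
    from pricelists.models import PriceList
    # idempotent: only creates the row if it doesn't exist yet
    PriceList.objects.get_or_create(**settings.MARKETPLACES['price_list'])

class MarketPlacesConfig(AppConfig):
    name = 'marketplaces'

    def ready(self):
        post_migrate.connect(create_default_price_list, sender=self)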

Related

Is it possible to bulk load an NDB child Entity in GAE?

At some point in the future I may need to bulk load migration data (e.g. from a CSV). Has anyone had exceptions raised doing the following? Also, is there any change in behaviour if the ndb.put_multi() function is used?
from google.appengine.ext import ndb

class X(ndb.Model):
    id = ndb.StringProperty()
    name = ndb.StringProperty()

class Y(ndb.Model):
    pass

def read_csv_row(line):
    """Returns an (id, name) tuple."""
    ...

while True:
    # readline() reads the next raw CSV line, as in the original code
    id, name = read_csv_row(readline())
    if not id:
        break
    x = X(parent=ndb.Key('Y', 'static_id'))
    x.id, x.name = id, name
    x.put()
From my research, and thanks to the comments, it seems that the code above (once made into valid code) would create problems eventually leading to google.appengine.api.datastore_errors.Timeout exceptions being raised.
See another question:
Datastore write limit tests - trying to break app engine, but it won´t break ;)
The best suggestion I have so far is to use a Task Queue to rate-limit this. More information at:
blog.notdot.net/tag/deferred
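For illustration, a rough sketch of the deferred approach; the batch size, the all_rows variable, and the wiring are assumptions, not tested values:
from google.appengine.ext import deferred, ndb

BATCH = 100  # assumed batch size; tune it to stay under write limits

def put_rows(rows):
    # rows: a list of (id, name) tuples already parsed from the CSV
    ndb.put_multi([X(parent=ndb.Key('Y', 'static_id'), id=i, name=n)
                   for i, n in rows])

# enqueue one task per batch instead of calling put() inline
for start in range(0, len(all_rows), BATCH):
    deferred.defer(put_rows, all_rows[start:start + BATCH])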

How to seed google's NDB (app engine storage)

I am an NDB user and this object database is quite cool. But how can I seed specific default values directly after deployment? Is there some predefined functionality or a standardized way of seeding the database?
As example:
I have the following ndb.Model and want some sort of "existing default parent".
class Category(ndb.Model):
    name = ndb.StringProperty(required=True)
    parent = ndb.KeyProperty(kind='Category', required=True,
                             default=<KeyOfRootCategory>)
Where to put the following seeding values?
main_category = Category(name="all", parent=None) #this is the root category
main_category.put()
It doesn't look like there are dedicated 'post-deployment' hooks for that. I'd simply put some code into the main handler script (the one that contains webapp2.WSGIApplication(...)) that checks whether the root category already exists and creates it if not. Alternatively, this could be part of some handler action.
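As a rough sketch of that idea (the 'root' key id is an assumption, and the required parent property is glossed over here):
from google.appengine.ext import ndb

# runs once at module load in the main handler script; idempotent
root_key = ndb.Key('Category', 'root')
if root_key.get() is None:
    Category(key=root_key, name='all').put()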
Why not create a simple seeding handler to call after deployment (e.g. /seeding/example)? The way I see it, you only have to seed once, so there's no need for any sort of hook.
seed.py:
import webapp2

class ExampleHandler(webapp2.RequestHandler):
    def get(self):
        # Do your thing
        # Maybe use "get_or_insert()". See [1]
        return

app = webapp2.WSGIApplication(
    [
        ('/seeding/example', ExampleHandler),
    ],
    debug=True
)
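For instance, the handler body could use get_or_insert(), which creates the entity only when the key id is not taken yet; a hypothetical fleshed-out version (the 'root' id and response line are assumptions):
class ExampleHandler(webapp2.RequestHandler):
    def get(self):
        # get_or_insert() is idempotent: repeated calls return the
        # existing entity instead of creating a duplicate
        root = Category.get_or_insert('root', name='all')
        self.response.write('seeded: %s' % root.key.id())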
Then in your app.yaml:
- url: /seeding/.*
  script: seed.app
  login: admin
The last line is crucial. It protects your seeding script from unauthorized access (see [2]).
[1] https://developers.google.com/appengine/docs/python/ndb/modelclass#Model_get_or_insert
[2] https://developers.google.com/appengine/docs/python/config/appconfig#Python_app_yaml_Requiring_login_or_administrator_status
I think I understand what you are asking.
You can create a parent key without having to create the entity. That will define your entity group.
Alternatively, it doesn't need a parent, but it can be the parent of any child. Any entity without a parent defined in its key becomes the root of its own entity group, and that entity group can have one or more members (i.e. itself and any children).
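For example (kind and property names here are illustrative): a key can serve as a parent even though no entity was ever stored under it:
from google.appengine.ext import ndb

class Item(ndb.Model):  # hypothetical minimal kind
    name = ndb.StringProperty()

# this key is never put(), yet it still defines the entity group
root_key = ndb.Key('Category', 'all')
child = Item(parent=root_key, name='books')
child.put()
assert child.key.parent() == root_key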

cakePHP get a variable from each Model and Controller

I got a question. I have a db table with settings (id, name).
I currently read them from the db like this:
$settings = $this->Setting->find('list');
How can I do this in the AppController (or somewhere similar) so that I can access the settings from every Controller and Model?
Hope someone can help me.
Thanks
Explanation:
I would assume you're looking for something like the code below (obviously you'll want to tweak it for your own application, but it's the idea).
In the AppController, it finds the settings in the table, then loops through each one and writes it into a Configure variable.
Code:
/**
 * Read settings from the DB and write them as Configure values
 */
function fetchSettings() {
    $this->loadModel('Setting');
    $settings = $this->Setting->findAll();
    foreach ($settings as $settingsData) {
        $value = $settingsData['Setting']['default_value'];
        // note: can't check for !empty because some values are 0 (zero)
        if (isset($settingsData['Setting']['value'])
            && $settingsData['Setting']['value'] !== null
            && $settingsData['Setting']['value'] !== '') {
            $value = $settingsData['Setting']['value'];
        }
        Configure::write($settingsData['Setting']['key'], $value);
    }
}
Then, you can access them anywhere in your app via Configure::read('myVar');
A warning from the CakePHP book about Configure variables (I think they're fine to use in this case, but it's something to keep in mind):
CakePHP’s Configure class can be used to store and retrieve
application or runtime specific values. Be careful, this class allows
you to store anything in it, then use it in any other part of your
code: a sure temptation to break the MVC pattern CakePHP was designed
for. The main goal of Configure class is to keep centralized variables
that can be shared between many objects. Remember to try to live by
“convention over configuration” and you won’t end up breaking the MVC
structure we’ve set in place.

parallel code execution python2.7 ndb

In my app, one of the handlers needs to get a bunch of entities and execute a function for each one of them.
I have the keys of all the entities I need. After fetching them I need to execute 1 or 2 instance methods for each one, and this slows my app down quite a bit: doing this for 100 entities takes around 10 seconds, which is way too slow.
I'm trying to find a way to get the entities and execute those functions in parallel to save time, but I'm not really sure which way is best.
I tried the _post_get_hook, but there I have a Future object and need to call get_result() and execute the function in the hook. This works kind of OK in the SDK, but raises a lot of 'maximum recursion depth exceeded while calling a Python object' errors, and I can't really understand why; the error message is not very informative.
Is the Pipeline API or ndb.Tasklets what I'm searching for?
At the moment I'm going by trial and error, but I would be happy if someone could point me in the right direction.
EDIT
My code is something similar to a filesystem: every folder contains other folders and files. The path of a Collection is set on another entity, so to serialize a Collection entity I need to get the referenced entity and read its path. On a Collection, the serialized_assets() function gets slower the more entities it contains. If I could execute a serialize function for each contained asset side by side, it would speed things up quite a bit.
class Index(ndb.Model):
    path = ndb.StringProperty()

class Folder(ndb.Model):
    label = ndb.StringProperty()
    index = ndb.KeyProperty()
    # contents is a list of keys of contained Folders and Files
    contents = ndb.KeyProperty(repeated=True)

    def serialized_assets(self):
        assets = ndb.get_multi(self.contents)
        serialized_assets = []
        for a in assets:
            kind = a._get_kind()
            assetdict = a.to_dict()
            if kind == 'Collection':
                assetdict['path'] = a.path
                # other operations ...
            elif kind == 'File':
                assetdict['another_prop'] = a.another_property
                # ...
            serialized_assets.append(assetdict)
        return serialized_assets

    @property
    def path(self):
        return self.index.get().path

class File(ndb.Model):
    filename = ndb.StringProperty()
    # other properties....

    @property
    def another_property(self):
        # compute something here
        return computed_property
EDIT2:
@ndb.tasklet
def serialized_assets(self, keys=None):
    assets = yield ndb.get_multi_async(keys)
    raise ndb.Return([asset.serialized for asset in assets])
Is this tasklet code OK?
Since most of the execution time of your functions is spent waiting for RPCs, NDB's async and tasklet support is your best bet. That's described in some detail here. The simplest usage for your requirements is probably Query.map(), like this (adapted from the docs):
@ndb.tasklet
def callback(msg):
    acct = yield msg.author.get_async()
    raise ndb.Return('On %s, %s wrote:\n%s' % (msg.when, acct.nick(), msg.body))

qry = Message.query().order(-Message.when)
outputs = qry.map(callback, limit=20)
for output in outputs:
    print output
The callback function is called for each entity returned by the query, and it can do whatever operations it needs (using _async methods and yield to do them asynchronously), returning the result when it's done. Because the callback is a tasklet, and uses yield to make the asynchronous calls, NDB can run multiple instances of it in parallel, and even batch up some operations.
The pipeline API is overkill for what you want to do. Is there any reason why you couldn't just use a taskqueue?
Use the initial request to get all of the entity keys, and then enqueue a task for each key, having the task execute the two functions for that entity. The concurrency will then be determined by the number of concurrent requests configured for that taskqueue.
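A rough sketch of that fan-out; the URL, queue name, and handler names are assumptions:
from google.appengine.api import taskqueue
from google.appengine.ext import ndb
import webapp2

def enqueue_all(keys):
    # initial request: one task per entity key
    for key in keys:
        taskqueue.add(url='/tasks/serialize',
                      params={'key': key.urlsafe()},
                      queue_name='serialize-queue')

class SerializeHandler(webapp2.RequestHandler):
    def post(self):
        entity = ndb.Key(urlsafe=self.request.get('key')).get()
        # run the one or two slow instance methods for this entity
        entity.serialized_assets()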

What approach is best for mapping a legacy application tables named after years in Django?

It's easiest to just show what the table names look like:
2009_articles
2010_articles
2011_articles
2009_customers
2010_customers
2011_customers
2009_invoices
2010_invoices
2011_invoices
Developers have simulated some kind of partitioning (long before MySQL supported it), but now it breaks any attempt to build a quick frontend where customers can see their invoices and switch between years.
After a couple of months I have the following results:
Changing Invoice._meta.db_table is useless, because any other relation deduced by the ORM will be wrong.
models.py cannot get request variables.
Option a:
Use abstract models, so Invoice10 sets Meta.db_table to the 2010 table and inherits from an abstract Invoice model, Invoice11 sets Meta.db_table to the 2011 table, and so on. Not DRY, although the app shouldn't need to support more than two or three years at the same time, but I will still have to check if
Option b:
Duplicate models and change imports in my views:
if year == 2010:
    from models import Article10 as Article
and so on.
Option c:
Dynamic models, as referred to in several places on the net. But why have a 100% dynamic model when I just need 1% of the model to be dynamic?
Option d:
Wow, just going crazy from frustration. What about multiple database settings and using a router?
Any help will be much appreciated.
Option e: create new relevant models/database structure, and do an import of the old data in the new structure.
It's ugly, but I must share it.
I achieved something useful by using the Django class_prepared signal and a ThreadLocal middleware to get the session:
from django.conf import settings
from django.db.models.signals import class_prepared
from myapp.middlewares import ThreadLocal

apps = ('oneapp', 'otherapp',)

def add_table_prefix(sender, *args, **kwargs):
    if sender._meta.app_label in apps:
        request = ThreadLocal.get_current_request()
        try:
            year = request.session['current_year']
        except Exception:
            year = settings.CURRENT_YEAR
        prefix = getattr(settings, 'TABLE_PREFIX', '')
        sender._meta.db_table = '%s_%s_%s_%s' % (
            prefix, year, sender._meta.coolgest_prefix, sender._meta.db_table)

class_prepared.connect(add_table_prefix)
So one model class maps to several identical database tables (invoices_01_2013, invoices_02_2013, ...) depending on which month and year the application user is browsing.
Working fine in production.
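For reference, a minimal sketch of the ThreadLocal middleware that get_current_request() implies; an old-style middleware class matching the era of this code, with assumed names:
# myapp/middlewares.py
import threading

_local = threading.local()

class ThreadLocal(object):
    @staticmethod
    def get_current_request():
        return getattr(_local, 'request', None)

    # old-style middleware hook: stash the request per thread
    def process_request(self, request):
        _local.request = request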
