How can I dynamically set the SQLite database file in Peewee?

I'm using Peewee for a project I'm working on, and I'm trying to figure out how to dynamically set the database so that I can use one for production and one for testing. All the examples I've seen have the following line outside of any class:
database = SqliteDatabase(DATABASE)
which I find strange, since I would think that you would want that to be in a class so you could pass in different database paths. Any suggestions for choosing one database for prod and another for testing?

I just came across a similar issue. Here's how I solved it, so that the path to the database is defined at run time:
Models file:
import peewee

database = peewee.SqliteDatabase(None)  # defer initialization until run time

class SomeData(peewee.Model):
    somefield = peewee.CharField()

    class Meta:
        database = database
Then in the class that uses the database:
from models import database, SomeData

class DatabaseUser:
    def __init__(self, db_path):
        database.init(db_path)  # bind the deferred database to a concrete file
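A hedged usage sketch (the paths are placeholders, and DatabaseUser is the class above):
DatabaseUser('/tmp/test.db')          # tests point at a throwaway file
# DatabaseUser('/var/data/prod.db')   # production points at the real one
database.create_tables([SomeData])    # tables can be created once init() has run
SomeData.create(somefield='hello')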

The database is typically declared at module scope because the model classes are defined there, and they typically depend on the database.
However, you can defer the initialization of your database using these techniques:
Run time db config: http://docs.peewee-orm.com/en/latest/peewee/database.html#run-time-database-configuration
Using a Proxy to define the DB arbitrarily: http://docs.peewee-orm.com/en/latest/peewee/database.html#dynamically-defining-a-database
The first is useful if you are using the same database class in both environments. You really only need the Proxy when the database class itself differs, e.g. SQLite for dev and Postgres for prod; see the sketch below.
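A minimal sketch of the Proxy technique (the TESTING flag is an assumption, standing in for however you detect the environment):
import peewee

database_proxy = peewee.Proxy()  # placeholder, resolved at run time

class BaseModel(peewee.Model):
    class Meta:
        database = database_proxy

# Once the environment is known:
if TESTING:
    database_proxy.initialize(peewee.SqliteDatabase(':memory:'))
else:
    database_proxy.initialize(peewee.PostgresqlDatabase('prod_db'))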

Related

Why use apps.get_model() when creating a data migration?

As per the Django docs, when creating Django data migrations we should use apps.get_model() rather than importing the models and using them directly.
Why does a data migration have to use the historical version of a model rather than the latest one? (The historical versions of the model will not be in use anyway, right?)
It uses the historical versions of the model so that it won't have problems trying to access fields that may no longer exist in the code base when you run your migrations against another database.
If you removed some field from your model and then wanted to run your migrations on a new database while importing your models directly, your migrations would complain about a field that no longer exists. When you use apps.get_model(...), Django uses the definitions from your migration files (e.g. migrations.AddField(...)) to reconstruct the version of your model as it was at that point in time.
This is also why Django says to be careful about using custom Model/Manager methods in your data migrations: I don't believe those methods can be recreated from the migration history, and their behaviour can change over time, so your migrations wouldn't be consistent.
Consider this model:
class A(models.Model):
    field1 = models.PositiveIntegerField()
    field2 = models.PositiveIntegerField()
Your migration history knows about these two fields, and any further migration will build on and modify this model state.
Now suppose you remove field1, and your model becomes:
class A(models.Model):
    field2 = models.PositiveIntegerField()
If a migration then tries to use field1, Django needs to know that field1 existed at that point in history. This is what apps.get_model() provides: it rebuilds the model from the previous migration history, so field1 can still be resolved. Otherwise you will get an error. A sketch of such a data migration follows.
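A hedged sketch of a data migration using the historical model (the app label 'myapp' and the dependency name are hypothetical):
from django.db import migrations

def forwards(apps, schema_editor):
    # Historical model: its fields match this point in the migration
    # history, not the current models.py.
    A = apps.get_model('myapp', 'A')
    for obj in A.objects.all():
        obj.field2 = obj.field1 * 2  # field1 still exists at this point
        obj.save()

class Migration(migrations.Migration):
    dependencies = [('myapp', '0001_initial')]
    operations = [
        migrations.RunPython(forwards, migrations.RunPython.noop),
    ]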

Setting up a CakePHP test database by using app/Config/Schema/schema.php

I'm setting up self-hosted continuous integration using buildbox.io. I need to create the tables and columns in my test database.
On my own computer, I've been using public $import = 'MyTable'; for every fixture, to copy the table definitions from the $default database connection in database.php. It works well, since my development database is always up to date with the latest migrations.
Also, it seems like a massive pain to do it the other way, where you'd have to manually keep the field definitions in your fixtures up to date every time you change your database. This seems especially redundant given that the list of fields is already up to date in app/Config/Schema/schema.php.
On the server, using public $import = 'MyTable'; won't work. Even if I did want to make the staging database my $default config when running tests, the staging database can't be relied upon to be up to date at all times.
So, my question is, how can I do it? Is there a way to tell Cake to use the definitions in schema.php for building its test database from fixtures? Or is the only way for me to manually add field definitions in all my fixtures? (that seems like a massive pain!)
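For reference, a hedged sketch of the import-style fixture in question (CakePHP 2.x; the model name and sample record are hypothetical):
<?php
class MyTableFixture extends CakeTestFixture {
    // copy the table definition from the $default connection
    public $import = 'MyTable';
    public $records = array(
        array('id' => 1, 'name' => 'example'),
    );
}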
What I do for fixtures is just maintain a set of SQL statements that contain test data, in a file named something like schema_MD5HASH.sql.
When my tests run, the suite computes the MD5 hash of my app's schema.php file and uses it to find and run the matching schema_MD5HASH.sql. If that file doesn't exist, the tests fail, and then it's just one extra step: remembering to rebuild my fixtures.
Something else I've been playing with is:
annotating the schema with extra data for each field - something akin to a field_type
loading up the schema file in a fixture and reading in the specific table
parsing the field_type and generating fake data according to a library like Faker
This way is sort of weird, and you have to be careful that a schema dump doesn't override your annotations, but it works pretty okay.
OK, I've got a solution I'm happy with, that requires minimal changes to my code.
For the Buildbox CI environment, I was previously using only one database - the test database. I've now added another, so I've got a test and default database (along with $test and $default database configs).
During the Buildbox bootstrap process (where it runs bootstrap.sh - a file that sits on your CI server, outside your app) I call Console/cake schema create, which will populate the default database from schema.php.
Now, I can run my tests as normal, and they will copy the table definitions from the default database, as my $import setting recommends.
So, the default database never contains any data - it only exists so it can be created from schema.php, and can therefore be used to import table definitions during my tests.
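A hedged sketch of what that database.buildbox.php might contain (CakePHP 2.x; hosts and credentials are placeholders):
<?php
class DATABASE_CONFIG {
    // exists only so `cake schema create` can build it from schema.php
    public $default = array(
        'datasource' => 'Database/Mysql',
        'host' => 'localhost',
        'login' => 'ci',
        'password' => 'ci',
        'database' => 'app_default',
    );
    // the connection the test fixtures actually run against
    public $test = array(
        'datasource' => 'Database/Mysql',
        'host' => 'localhost',
        'login' => 'ci',
        'password' => 'ci',
        'database' => 'app_test',
    );
}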
Here are the Cake-specific lines in my Buildbox bootstrap.sh:
echo -e "--- Installing plugins via composer"
buildbox-run "composer install"
echo -e "--- Setting up database"
buildbox-run "cp ./app/Config/database.buildbox.php ./app/Config/database.php"
# create the default database, so that we can use $import as a means of generating fixture data on the test database.
# say yes at all prompts: http://askubuntu.com/questions/338857/automatically-enter-input-in-command-line
buildbox-run "yes | ./app/Console/cake schema create"

JPA + Spring how to initialize database with custom tables and data

In my application I use a Spring context and JPA. I have a set of entities annotated with @Entity, and tables for them are created automatically during system startup. Recently I started using Spring ACL, so I have to have the following additional DB schema, and I don't want these tables to be mapped to entities (simply because I don't need that: Spring ACL manages them independently).
I want to automatically insert e.g. an admin user account into the User entity's table. How do I do that properly?
I want to initialize the Spring ACL custom tables during system startup, but an SQL script file does not seem to be a good solution: if I use different databases for production and functional testing (e.g. MySQL and HSQL), the differing SQL dialects prevent the script from running properly on both engines.
At first I tried to use a ServletListener that checks the db during servlet initialization and adds the necessary data and schema, but this does not work for integration tests (because no servlet is involved at all).
What I want to achieve is a Spring bean (?) that is launched after JPA has initialized all entity tables, inserts all startup data using injected DAOs, and somehow creates the Spring ACL schema. Then I want the bean to be removed from the IoC container (because I simply don't need it anymore). Is that possible?
Or is there any better way of doing this?
Standard JPA (2.1+) allows you to run an SQL script when the persistence unit is loaded:
http://docs.oracle.com/javaee/7/tutorial/persistence-intro005.htm
Add this property to your persistence.xml file:
<property name="javax.persistence.sql-load-script-source"
          value="META-INF/sql/data.sql" />
And fill the data.sql file with your default values.
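For example, data.sql might contain plain INSERT statements (the table and columns here are hypothetical):
INSERT INTO users (id, username, role) VALUES (1, 'admin', 'ADMIN');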
If you are using EclipseLink you could use a SessionEventListener to execute code after JPA login. You could perform your schema creation and setup in a postLogin event.
You could use the Schema Framework in EclipseLink (org.eclipse.persistence.tools.schemaframework) to create tables and DDL in a database platform independent way. (TableDefinition, SchemaManager classes)
I use the @PostConstruct annotation to invoke initialization methods.
As the documentation describes: the @PostConstruct annotation is used on a method that needs to be executed after dependency injection is done, to perform any initialization. You can simply add a Spring bean with a method annotated @PostConstruct; that method will be executed once the tables are created (or rather, once the other beans are ready).
Code sample:
@Component
public class EntityLoader {

    @Autowired
    UserRepository userRepo;

    @PostConstruct
    public void initApiUserData() {
        User u = new User();
        // set user properties here
        userRepo.save(u);
    }
}
If you use Hibernate, create an SQL script import.sql in the classpath root; Hibernate will execute it on startup. -- This worked in former Hibernate versions. In the docs of the current version, 4.1, I have not found any mention of this feature.
But Hibernate 4.1 has another feature:
Property: hibernate.hbm2ddl.import_files
Comma-separated names of the optional files containing SQL DML statements executed during the SessionFactory creation. This is useful for testing or demoing: by adding INSERT statements for example you can populate your database with a minimal set of data when it is deployed.
File order matters: the statements of a given file are executed before the statements of the following files. These statements are only executed if the schema is created, i.e. if hibernate.hbm2ddl.auto is set to create or create-drop.
e.g. /humans.sql,/dogs.sql
You could try using Flyway to:
create the tables using the SQL DDL as an SQL migration, and
possibly put some data in the tables using either SQL- or Java-based migrations; you might want the latter if you need environment-specific or other runtime info.
It's a lot easier than it sounds, and you also end up with Flyway itself as a bonus, if not a must. A sketch of a Java-based migration follows.
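A hedged sketch of a Java-based migration, assuming a recent Flyway (class, table, and column names are hypothetical; Flyway discovers the class via its V2__ naming convention):
import org.flywaydb.core.api.migration.BaseJavaMigration;
import org.flywaydb.core.api.migration.Context;
import java.sql.Statement;

public class V2__Seed_admin_user extends BaseJavaMigration {
    @Override
    public void migrate(Context context) throws Exception {
        // runs on the migration's JDBC connection
        try (Statement stmt = context.getConnection().createStatement()) {
            stmt.execute("INSERT INTO users (username) VALUES ('admin')");
        }
    }
}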

Access to database and importing data to Magento

I am considering building some sort of mechanism that would import data into the Magento database.
However, as I read in the documentation, it is recommended to use the models available in Magento by default where possible.
My question would be: is it possible to use the model approach without creating a Magento module, and then execute this code from the command line?
Or would it be best to use a module? And what if I intend to build two import mechanisms, where one uses a custom table (I may need one more table for one customization, but that table would stand apart) and the other uses the tables and models available in Magento?
To bootstrap Magento and use it from command line create a php file starting with:
<?php
require_once '../Mage.php'; // adjust to the correct path to Mage.php
$app = Mage::app();
// allow operations (e.g. deletes) that Magento restricts to the admin area
Mage::register('isSecureArea', true);
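From there you can use the normal model layer; a hedged example (the product ID is illustrative):
// load an existing product by ID and read an attribute
$product = Mage::getModel('catalog/product')->load(1);
echo $product->getName(), PHP_EOL;

// or iterate over a collection
foreach (Mage::getModel('catalog/product')->getCollection() as $p) {
    echo $p->getSku(), PHP_EOL;
}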

Integrating GeoDjango into existing Django project

I have a Django project with multiple apps. They all share a db with engine = django.db.backends.postgresql_psycopg2. Now I want some functionality of GeoDjango and decided to integrate it into my existing project. I read through the tutorial, and it looks like I have to create a separate spatial database for GeoDjango. I wonder if there is any way around that. I tried to add this to one of my apps' models.py without changing my db settings:
from django.db import models
from django.contrib.gis.db.models import PointField

class Location(models.Model):
    location = PointField()
But when I run syncdb, I get this error:
File "/home/virtual/virtual-env/lib/python2.7/site-packages/django/contrib/gis/db/models/fields.py", line 200, in db_type
return connection.ops.geo_db_type(self)
Actually, as I recall, django.contrib.gis.db.backends.postgis is an extension of postgresql_psycopg2, so you could change the db driver in settings, create a new db with the spatial template, and then migrate your data to the new db (South is great for this). GeoDjango itself is highly dependent on the DB's inner methods, so unfortunately you can't use it with a regular db.
Alternatively, you could make use of Django's multi-db ability and create an extra db for the GeoDjango models.
Your error looks like it comes from not changing the database extension in your settings file. You don't technically need to create a new database using the spatial template, you can simply run the PostGIS scripts on your existing database to get all of the geospatial goodies. As always, you should backup your existing database before doing this though.
I'm not 100% sure, but I think you can pipe postgis.sql and spatial_ref_sys.sql into your existing database, grant permissions on the tables, and change the db engine setting to "django.contrib.gis.db.backends.postgis". (After you have installed the dependencies, of course.)
https://docs.djangoproject.com/en/dev/ref/contrib/gis/install/#spatialdb-template
I'd be interested to see what you find. Be careful, postgis installation can build some character but you don't want it to build too much.
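A minimal sketch of the settings change (database name and credentials are placeholders):
DATABASES = {
    'default': {
        # postgis extends the postgresql_psycopg2 backend
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': 'mydb',
        'USER': 'myuser',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
    },
}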
From the docs (Django 3.1) https://docs.djangoproject.com/en/3.1/ref/databases/#migration-operation-for-adding-extensions :
If you need to add a PostgreSQL extension (like hstore, postgis, etc.) using a migration, use the CreateExtension operation.
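Following that, such a migration might look like this (the surrounding boilerplate is the usual generated migration file):
from django.contrib.postgres.operations import CreateExtension
from django.db import migrations

class Migration(migrations.Migration):
    operations = [
        CreateExtension('postgis'),
    ]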
