Integrating GeoDjango into existing Django project - database

I have a Django project with multiple apps. They all share a database with ENGINE = django.db.backends.postgresql_psycopg2. Now I want some of GeoDjango's functionality and have decided to integrate it into my existing project. I read through the tutorial, and it looks like I have to create a separate spatial database for GeoDjango. I wonder if there is any way around that. I tried adding this to one of my apps' models.py without changing my database settings:
from django.contrib.gis.db.models import PointField

class Location(models.Model):
    location = PointField()
But when I run syncdb, I get this error:
File "/home/virtual/virtual-env/lib/python2.7/site-packages/django/contrib/gis/db/models/fields.py", line 200, in db_type
return connection.ops.geo_db_type(self)

Actually, as I recall, django.contrib.gis.db.backends.postgis is an extension of postgresql_psycopg2, so you could change the db driver in settings, create a new db from the spatial template, and then migrate your data to the new db (South is great for this). GeoDjango itself is highly dependent on the database's internal methods, so unfortunately you can't use it with a regular db.
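In settings, the driver swap looks something like this (the database name and credentials here are placeholders):
DATABASES = {
    'default': {
        # was 'django.db.backends.postgresql_psycopg2'
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': 'my_spatial_db',   # hypothetical database name
        'USER': 'my_user',
        'HOST': 'localhost',
    }
}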
Another way: you could make use of Django's multi-db support and create an extra database just for the GeoDjango models.
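A rough sketch of that multi-db route, assuming a hypothetical second connection named 'geo' and a hypothetical app labelled 'geoapp' that holds the GeoDjango models:
# settings.py (additions)
DATABASES['geo'] = {
    'ENGINE': 'django.contrib.gis.db.backends.postgis',
    'NAME': 'geo_db',   # hypothetical spatial database
}
DATABASE_ROUTERS = ['myproject.routers.GeoRouter']

# myproject/routers.py
class GeoRouter(object):
    """Route everything in the hypothetical 'geoapp' app to the 'geo' db."""
    def db_for_read(self, model, **hints):
        return 'geo' if model._meta.app_label == 'geoapp' else None

    def db_for_write(self, model, **hints):
        return 'geo' if model._meta.app_label == 'geoapp' else None

    def allow_syncdb(self, db, model):
        # keep geoapp tables out of 'default' and everything else out of 'geo'
        if model._meta.app_label == 'geoapp':
            return db == 'geo'
        return None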

Your error looks like it comes from not changing the database engine in your settings file. You don't technically need to create a new database using the spatial template; you can simply run the PostGIS scripts on your existing database to get all of the geospatial goodies. As always, you should back up your existing database before doing this, though.

I'm not 100% sure, but I think you can pipe postgis.sql and spatial_ref_sys.sql into your existing database, grant permissions on the tables, and change the db setting to "django.contrib.gis.db.backends.postgis". (After you have installed the dependencies, of course.)
https://docs.djangoproject.com/en/dev/ref/contrib/gis/install/#spatialdb-template
I'd be interested to see what you find. Be careful: PostGIS installation can build some character, but you don't want it to build too much.
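If you go that route, the commands look roughly like this (a sketch; the database name is a placeholder and the exact paths to the scripts vary with your PostgreSQL/PostGIS version):
psql -d my_existing_db -f postgis.sql
psql -d my_existing_db -f spatial_ref_sys.sql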

From the docs (Django 3.1) https://docs.djangoproject.com/en/3.1/ref/databases/#migration-operation-for-adding-extensions:
If you need to add a PostgreSQL extension (like hstore, postgis, etc.) using a migration, use the CreateExtension operation.
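A minimal migration using that operation might look like this (app-specific dependencies omitted):
from django.contrib.postgres.operations import CreateExtension
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = []

    operations = [
        CreateExtension('postgis'),
    ]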

Related

Create tables in Hibernate automatically or manually?

I'm currently developing a servlet homepage (Spring + Hibernate + MySQL).
At the moment I'm using the Hibernate property hibernate.hbm2ddl.auto set to update.
This is working fine and Hibernate creates and updates my tables.
However, I've read in multiple places that this is not recommended in production and that it is unsafe.
But if I don't set this option my tables are not created, and I really don't want to create my tables manually on the server. I have limited time and am working on this alone.
How is this usually done? It seems like quite a lot of work to add all the tables manually, imo.
In production, you typically have already existing tables with a large amount of data that you don't want to lose, and that you want to migrate to the new schema. Hibernate can't do that automagically for you. It doesn't know that the data that was previously in column A must now be in the new column B.
So you'll need to create a migration script. Of course, you can use Hibernate to generate the new schema for you in development, see what the differences with the old schema are, and create your script from that. But yes, taking an app in production and migrating it requires some work.
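For what it's worth, a common convention (my addition, not from the answer above) is to let hbm2ddl.auto touch the schema only in development and have it merely validate in production:
# development: Hibernate may alter the schema
hibernate.hbm2ddl.auto=update
# production: check the mappings against the existing schema, never alter it
hibernate.hbm2ddl.auto=validate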

How to read data from SQLite database on Eclipse

I'm using a static database that I created with SQLite Database Browser. I put it in my assets folder and wrote code to copy the database into a database variable (does that make sense?) so I could read information from it. The problem is I don't know how (mostly the SQL queries involved), so what methods would you suggest? In other words, what methods should I add to my Database Handler class (or data adapter?) in order to present the data in a list view, for example?
Thank you for all your help.
Read the Android database documentation.
Copying your database from the assets folder is typically done in the onCreate and/or onUpgrade functions of your SQLiteOpenHelper-derived class.
This tutorial covers the basics:
Using the SQLite Database with ListView
As for naming things: use whatever names make sense in your application.

Database migration from new YAML in Doctrine

I have to add a new column and a new table to my database, but I don't have shell access to my server. I changed my YAML file. How can I tell Doctrine to "migrate the models and database to the changed YAML"?
The Doctrine_Core class has a lot of static methods, like generateMigrationsFromDiff(), which you can use in case you don't have access to CLI tasks (see the full API at http://www.doctrine-project.org/Doctrine_Core/1_2). I am not sure that it's exactly what you need, so don't forget to make a backup :)

Best strategy to initially populate a Grails database backend

I'd like to know your approach/experiences when it's time to initially populate the Grails DB that will hold your app data. Assuming you have CSVs with data, is it "safer" to create a script (with whatever tool fits you) that:
1. Generates the Bootstrap commands with the domain classes, run it in a test or dev environment, and then use the native db commands to export it to prod?
2. Creates the DB's insert script, assuming GORM's version = 0 and manually incrementing the soon-to-be autogenerated IDs?
My fear is that the second approach may lead to inconsistencies, since Hibernate will be responsible for ID generation, and there may be something else I'm missing.
Thanks in advance.
Take a look at this link. It allows you to run Groovy scripts in the normal Grails context, giving you access to all Grails features, including GORM. I'm currently importing data from a legacy database and have found that writing a Groovy script that uses the Groovy SQL interface to pull out the data, then putting that data into domain objects, is the easiest approach. Once you have the data imported, you just use the commands specific to your database system to move that data to the production database.
Update:
Apparently the updated entry referenced from the blog entry I link to no longer exists. I was able to get this working using code at the following link which is also referenced in the comments.
http://pastie.org/180868
Finally, it seems that the simplest solution is to note that GORM, as of the current release (1.2), uses a single sequence for all auto-generated IDs. Taking this into account when creating whatever scripts you need (in the language of your preference) should suffice. I understand it's planned for the 1.3 release that every table will have its own sequence.

Altering database tables in Django

I'm considering using Django for a project I'm starting (fyi, a browser-based game), and one of the features I like the most is using syncdb to automatically create the database tables based on the Django models I define (a feature that I can't seem to find in any other framework).
I was already thinking this was too good to be true when I saw this in the documentation:
Syncdb will not alter existing tables
syncdb will only create tables for models which have not yet been installed. It will never issue ALTER TABLE statements to match changes made to a model class after installation. Changes to model classes and database schemas often involve some form of ambiguity and, in those cases, Django would have to guess at the correct changes to make. There is a risk that critical data would be lost in the process.
If you have made changes to a model and wish to alter the database tables to match, use the sql command to display the new SQL structure and compare that to your existing table schema to work out the changes.
It seems that altering existing tables will have to be done "by hand".
What I would like to know is the best way to do this. Two solutions come to mind:
As the documentation suggests, make the changes manually in the DB;
Do a backup of the database, wipe it, create the tables again (with syncdb, since now it's creating the tables from scratch) and import the backed-up data (this might take too long if the database is big)
Any ideas?
Manually doing the SQL changes and dump/reload are both options, but you may also want to check out some of the schema-evolution packages for Django. The most mature options are django-evolution and South.
EDIT: And hey, here comes dmigrations.
UPDATE: Since this answer was originally written, django-evolution and dmigrations have both ceased active development and South has become the de facto standard for schema migration in Django. Parts of South may even be integrated into Django within the next release or two.
UPDATE: A schema-migrations framework based on South (and authored by Andrew Godwin, author of South) is included in Django 1.7+.
As noted in other answers to the same topic, be sure to watch the DjangoCon 2008 Schema Evolution Panel on YouTube.
Also, two new projects on the map: Simplemigrations and Migratory.
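To give a flavour of the South workflow mentioned above, migrating a schema change is typically two commands (the app name is a placeholder; use --initial instead of --auto for an app's first migration):
python manage.py schemamigration myapp --auto   # generate a migration from the model diff
python manage.py migrate myapp                  # apply it to the database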
One good way to do this is via fixtures, particularly the initial_data fixtures.
A fixture is a collection of files that contain the serialized contents of the database. So it's like having a backup of the database but as it's something Django is aware of it's easier to use and will have additional benefits when you come to do things like unit testing.
You can create a fixture from the data currently in your DB using django-admin.py dumpdata. By default the data is in JSON format, but other options such as XML are available. A good place to store fixtures is a fixtures sub-directory of your application directories.
You can load a fixture using django-admin.py loaddata but, more significantly, if your fixture has a name like initial_data.json it will be automatically loaded when you do a syncdb, saving you the trouble of importing it yourself.
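In practice that's something like the following (app name and path are illustrative):
django-admin.py dumpdata myapp > myapp/fixtures/initial_data.json
django-admin.py loaddata initial_data.json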
Another benefit is that when you run manage.py test to run your unit tests, the temporary test database will also have the initial data fixture loaded.
Of course, this will work when you're adding attributes to models and columns to the DB. If you drop a column from the database you'll need to update your fixture to remove the data for that column, which might not be straightforward.
This works best when doing lots of little database changes during development. For updating production DBs a manually generated SQL script can often work best.
I've been using django-evolution. Caveats include:
Its automatic suggestions have been uniformly rotten; and
Its fingerprint function returns different values for the same database on different platforms.
That said, I find the custom schema_evolution.py approach handy. To work around the fingerprint problem, I suggest code like:
BEFORE = 'fv1:-436177719'  # first fingerprint
BEFORE64 = 'fv1:-108578349625146375'  # same, but on 64-bit Linux
AFTER = 'fv1:-2132605944'
AFTER64 = 'fv1:-3559032165562222486'

fingerprints = [
    BEFORE, AFTER,
    BEFORE64, AFTER64,
]

CHANGESQL = """
/* put your SQL code to make the changes here */
"""

evolutions = [
    ((BEFORE, AFTER), CHANGESQL),
    ((BEFORE64, AFTER64), CHANGESQL),
]
If I had more fingerprints and changes, I'd refactor it. Until then, making it cleaner would be stealing development time from something else.
EDIT: Given that I'm manually constructing my changes anyway, I'll try dmigrations next time.
django-command-extensions is a Django library that adds some extra commands to manage.py. One of them is sqldiff, which should give you the SQL needed to update the database to your new models. It is, however, listed as 'very experimental'.
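Usage is one command per app, e.g. (the app name is a placeholder):
python manage.py sqldiff myapp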
So far in my company we have used the manual approach. What works best for you depends very much on your development style.
We generally don't have many schema changes in production systems and have somewhat formalized rollouts from development to production servers. Whenever we roll out (10-20 times a year) we do a full diff of the current and the upcoming production branch, reviewing all the code and noting what has to be changed on the production server. The required changes might be additional dependencies, changes to the settings file, and changes to the database.
This works very well for us. Having it all automated is a nice vision but too difficult for us; maybe we could manage migrations, but we would still need to handle additional library, server, and other dependencies.
Django 1.7 (currently in development) is adding native support for schema migration with manage.py migrate and manage.py makemigrations (migrate deprecates syncdb).
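The built-in workflow mirrors South's (the app name is a placeholder):
python manage.py makemigrations myapp   # generate migrations from model changes
python manage.py migrate                # apply them to the database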
