I'd like to treat the database as read-only and never write to it. Is there a way to easily prevent syncdb from even bothering to check whether to update that database?
With Django 1.2 and the ability to have multiple databases, I'd like to be able to query a database for information. I'd never need to actually write to that database.
However, I'd be scared if syncdb ran and attempted to update that database (my account there may not technically be read-only). Mainly, I'd just like to use/abuse the Django ORM as a way to query that database.
UPDATE: Sorry, I need to be able to sync one of the databases in settings.py, just not this specific one.
Heh, I guess I'll answer my own question (RTFM!)...
http://docs.djangoproject.com/en/dev/topics/db/multi-db/#an-example
def allow_syncdb(self, db, model):
...
That's a definite start...
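To spell it out a bit more, here's roughly what a router built around that hook could look like (the "readonly" alias, class, and module names are placeholders of mine, not from the docs):

    # myproject/routers.py (hypothetical module)
    class NoSyncDBRouter(object):
        """Keep syncdb away from a read-only database alias."""

        def allow_syncdb(self, db, model):
            # Refuse table creation on the read-only alias; return None
            # to defer to other routers (and the default) otherwise.
            if db == 'readonly':
                return False
            return None

    # settings.py
    DATABASE_ROUTERS = ['myproject.routers.NoSyncDBRouter']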
If you don't need syncdb, don't run it, simple as that. Updating the database is what it does, so if you don't need that, you shouldn't run it - it doesn't do anything else.
However if you're actually asking how to prevent syncdb from running at all, one possibility would be to define a 'dummy' syncdb command inside one of your apps. Follow the custom management command instructions but just put pass inside the command's handle method. Django will always find your version of the command first, making it a no-op.
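For reference, a minimal sketch of such a dummy command (the app name is a placeholder; Django looks for the file at this path, and both management/ and management/commands/ need an empty __init__.py so the command is discovered):

    # yourapp/management/commands/syncdb.py
    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = "No-op replacement for the built-in syncdb."

        def handle(self, *args, **options):
            # Deliberately do nothing, so syncdb never touches the database.
            pass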
This issue came up for me when working with read-only mirrors of Microsoft SQL Server databases (ugh). You can't selectively run syncdb on a single app or database, but you do have to run syncdb when you first create a new Django project or install a new app that requires it (like South). What I did was put my read-only database in its own Django app and then add an empty South migration to that app. That way syncdb thinks South is handling db setup for those apps, and South doesn't do anything to them!
manage.py schemamigration app_with_read_only_database --empty initial_empty_migration_that_does_nothing
That leaves you free to manage the schema of that database outside of Django.
I have a flask app that is deployed on Google's App Engine. I have noticed a minor bug and I would like to fix it but my database is already populated.
How can I make this minor code change and deploy back to my app without losing all my data? (This is probably a basic question, but I'm not finding much; the tutorials online all focus on creating and deploying an app, not updating one.)
Thus far, I have been dropping and re-creating the tables whenever I redeploy, mostly out of ignorance. Here are the steps I have followed:
1. make the change in my app
2. commit and push the changes to Bitbucket source control
3. in Google Cloud SDK: git pull
4. in Google Cloud SDK: gcloud app deploy
These steps result in an empty database because the directory I am pushing from my local computer has an empty database. Is this where I should be using git merge?
Is this a database "migration" or is this a "git merge"? I'm not sure what the right terms are to use to research this further. Thanks.
There are a couple of angles to your question. I'm going to try to give you some information, but let me warn you, this isn't going to be a trivial change to your workflow, you'll have to change some things.
First of all, based on the way you worded your question I get the idea that you commit your database to git along with your code. If I got this right, then this is something that you need to stop doing. The database is not code, so it should not be committed to source control.
You should have a completely independent database on each installation of your application. For example, you will have a database on your own machine to do development. You will also need another database in your gcloud deployment. You may need more databases if you have other uses for your application. A very common third database for many people is one that is used for automated tests, which could also be located in your local development machine, but is not the same database that you use for day to day development.
To make changes to your database schema you will not drop and recreate tables anymore; that is clearly something you already realized needs improvement. A good approach to making these changes is to use a database migration framework. These tools let you generate short scripts that apply each change to the database in a focused way, without destroying and recreating everything, so in general no data is lost. For Flask-SQLAlchemy, the best option for database migrations is Flask-Migrate, which is a lightweight wrapper around the Alembic migration framework. (I might be biased here, as I'm the author of the Flask-Migrate extension!)
Documentation for Flask-Migrate: https://flask-migrate.readthedocs.io/en/latest/.
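If a concrete starting point helps, wiring the extension up looks roughly like this (the database URI and app layout are placeholders):

    from flask import Flask
    from flask_sqlalchemy import SQLAlchemy
    from flask_migrate import Migrate

    app = Flask(__name__)
    app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///app.db'  # placeholder URI

    db = SQLAlchemy(app)
    migrate = Migrate(app, db)  # registers the "flask db" command group

From there, flask db init is run once to create the migrations directory, and each model change becomes a flask db migrate -m "description" followed by a flask db upgrade run against each installation, including the deployed one.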
I'm working on an OpenCart project. (Note: this is my first time dealing with it.)
I want to somehow implement my usual workflow of:
working on localhost, experimenting, etc,
deploying the changes to the production server (sometimes to a staging server before that),
adding the database changes.
Now, how should I achieve this?
What I already did with GIT is I created an automated deployment flow, which consists of the following:
building a deployment version (checking out master/HEAD's upload/ directory and removing the upload/install directory),
copying the upload/ dir's contents to the target server.
This works fine, but won't solve the database migration issue.
I think it's not even as simple as updating certain tables in the target server's database from my local database, since, for example, the "settings" table contains data that's specific to the environment.
So I can't just overwrite the settings table with my local version.
It seems to me that the easiest - and ugliest - solution would be to develop on the prod server in parallel to the localhost changes. For example, if I install a module that causes changes in the database, I would need to replicate every step I took installing and configuring that module in the local environment. The same goes for every setup change I make in the admin (meta changes, etc.).
This sounds awfully painful to me, so I hope there's a better solution out there other than doing every database-related change twice...
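The closest alternative I can picture is versioning the database changes themselves: numbered SQL files committed alongside the code and applied once each on the target server. A rough sketch of the idea (the connection details, table name, and migrations/ directory are made up, and the semicolon splitting is naive):

    # migrate.py - apply any migrations/*.sql files not yet recorded
    import os
    import pymysql  # assumes a MySQL database and the pymysql driver

    conn = pymysql.connect(host='localhost', user='oc_user',
                           password='secret', db='opencart')
    with conn.cursor() as cur:
        cur.execute("CREATE TABLE IF NOT EXISTS schema_migrations ("
                    "filename VARCHAR(255) PRIMARY KEY)")
        cur.execute("SELECT filename FROM schema_migrations")
        applied = {row[0] for row in cur.fetchall()}
        for name in sorted(os.listdir('migrations')):
            if name.endswith('.sql') and name not in applied:
                with open(os.path.join('migrations', name)) as f:
                    # Naively split the file into statements and run each one.
                    for statement in f.read().split(';'):
                        if statement.strip():
                            cur.execute(statement)
                cur.execute("INSERT INTO schema_migrations (filename) "
                            "VALUES (%s)", (name,))
    conn.commit()

That way the settings table would never be overwritten wholesale; only the explicit ALTER/INSERT statements I chose to record get replayed.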
Thanks in advance!
Background:
I am using GitHub to store a ZF2 application.
The database schema and the actual data stored inside it are not stored in version control. At the moment I am in development mode, so I have some database dump scripts that I load into the database when I need to. I also tweak entries in the database via phpMyAdmin when I need ongoing granular control for immediate testing purposes. I am also looking into using Doctrine ORM, so my schema will be part of my code via annotations, and that will be checked into GitHub. Doctrine ORM will generate the actual schema for me, although it is still a separate step in the deployment process. The actual data, however, will still be outside of the application and outside of the repository; it currently has to be dealt with separately and is not automated.
Goal:
I want to be able to deploy ZF2 application and the database schema, and the data onto Zend Server and have it "just work" in the most automated, least manual way possible.
Question:
What is a recommended, best practice way to deploy every aspect of ZF2 application in the most automated, least manual way possible and have it "just work"? Let's focus on the Development and Testing mode here, as in Production it may be good to have separate deployment steps to protect against accidental live data overwrites.
You can try Phing (http://www.phing.info/) for deploying your PHP application, adjusting directory permissions, running database migrations, running unit tests, etc. I used Phing in a couple of my projects with great success.
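To give a feel for it, a minimal Phing build file might look like this (target names, paths, and commands are placeholders for your project):

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- build.xml - minimal Phing sketch: test, migrate, then copy files -->
    <project name="zf2app" default="deploy">
        <target name="test">
            <exec command="vendor/bin/phpunit" checkreturn="true" passthru="true"/>
        </target>
        <target name="migrate" depends="test">
            <!-- e.g. let Doctrine bring the schema up to date -->
            <exec command="vendor/bin/doctrine orm:schema-tool:update --force"
                  checkreturn="true" passthru="true"/>
        </target>
        <target name="deploy" depends="migrate">
            <copy todir="/var/www/zf2app">
                <fileset dir=".">
                    <include name="**/*"/>
                    <exclude name="build.xml"/>
                </fileset>
            </copy>
        </target>
    </project>

Running phing deploy then executes the test, migrate, and deploy targets in dependency order.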
I have been working on the models for a Django app in my spare time. I have an issue when it comes to testing: whenever I realize I have a mistake in my models, I have to go through the tedious process of dropping the corresponding database, recreating it, and running python manage.py syncdb. Apparently this is because Django's syncdb cannot change the database schema. I do not fully understand what that means, but I do know it is a pain to keep deleting and recreating my database every time I change something in my models.
Is there a better way of doing this? Is something wrong with my install? I feel like this is such a simple thing to fix that I must be doing something wrong.
No, nothing is wrong with your installation. It is a limitation of Django.
There is a third-party app called South (django-south), which can be used for managing "migrations" (changes to database models). It is a very widely used app for managing database changes.
There is plenty of documentation online to help you understand how South works and how to use it.
This is a good tutorial on South.
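The day-to-day workflow with South looks roughly like this (the app name is a placeholder):

    python manage.py schemamigration myapp --initial   # once, for the current models
    python manage.py migrate myapp                     # create the tables
    # ...edit models.py...
    python manage.py schemamigration myapp --auto      # generate a migration
    python manage.py migrate myapp                     # apply it; existing data is kept

If the tables already exist because you created them with syncdb, South also ships a convert_to_south command to bootstrap the first migration without touching them.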
I've just started using Heroku with Django and it seems great. However, when I change my existing models I'm not sure how to run those changes to the Heroku environment. The syncdb works just fine when adding all new database tables, but how should I modify existing tables?
I found out that Heroku provides psql access only to a dedicated database so that's out of the question. I haven't tried South but it seems like a solution.
So I guess I'm asking how to make database changes with Django and Heroku?
What you are asking for is called "schema migration" or even "schema evolution". Django has some documentation about it on the wiki.
Django's syncdb command does not support that. As a matter of fact, the documentation for syncdb is clear:
Creates the database tables for all apps in INSTALLED_APPS whose tables have not already been created
Rather, Django proposes to drop the tables manually and then run syncdb again; the documentation of the deprecated reset command says:
You can also use ALTER TABLE or DROP TABLE statements manually.
But fear not, there are many reusable apps to help you with proper schema migrations, and hopefully you can pick the one that suits you best. Rather than elaborate in my answer, let me link to an article I wrote about Django schema migration which compares the current solutions.
South works great on Heroku.
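The usual rhythm is to generate migrations locally, commit them, and apply them on the Heroku side (the app name is a placeholder):

    python manage.py schemamigration myapp --auto   # locally, after editing models
    git add myapp/migrations && git commit -m "Add migration"
    git push heroku master
    heroku run python manage.py migrate myapp       # run against the Heroku database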