I'm using Django in development. It is annoying that every time I change something in a model I need to delete the database and run syncdb. For testing purposes, I want to add some initial data to the database automatically every time I run syncdb. I've tried putting this sort of code inside one app's __init__.py, but it runs before the database is created, and it's a bit annoying to deal with the exceptions. Isn't there a neater way to do this?
Once you have initially populated the database, use the dumpdata command to create a fixture (a copy of the data) and save it to a file. Then use the loaddata command to automatically populate the database.
Suppose you have an app called bookstore for which you want to automatically load a series of books, authors, etc.
Once you have added some records in the database:
python manage.py dumpdata bookstore > initial.json
Once you have made some changes or want to recreate the database:
python manage.py loaddata initial.json
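If you want syncdb to load the data automatically every time (which is what the question asks for), a common approach is to save the dump as an initial_data fixture inside the app's fixtures directory, for example:

python manage.py dumpdata bookstore > bookstore/fixtures/initial_data.json

Django then loads that fixture on every syncdb without an explicit loaddata call.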
South is nice, but it is overkill for this purpose.
If you need to make changes to the database and you want your data to remain safe, then South is the right tool for you.
It manages changes to the database schema and its data. You won't need to delete the database when you make changes if you use South. It also keeps track of previous states of the database, so you can revert if you want to. I recommend you read its documentation; it will surely help.
I have my migration files, which create the initial schema for the database, and my seed files, which populate that schema with random data and set types.
My understanding of migrations and seeds is that whenever a new team member joins, he can just run them and be up to speed with all the DB changes and the data required for the product to work; plus, you can apply changes to staging and prod by running the migration files.
But as my project has advanced, new types of data have been emerging, and the only way to apply them was to create a migration that runs inserts into the database.
The problem I have is that artisan runs all the migrations first and then runs all the seeds, and there does not seem to be a way to specify the order in which they should run. So if I run migrate:refresh --seed I get errors, because it applies the latest migrations, which insert new data (that might or might not depend on the types inserted by the seeds), before the seeder has inserted its own data.
One solution we tried is to update the seeds as well and to check, before applying changes in the migrations that insert data, whether the data is already there, but this has become very troublesome to maintain.
What is the expected use of the migrations and seeds for this scenario?
Update
To try to make it clearer:
Let's say I have a migration to create users:
Users:{id, name, type}
And I have my seed to create users.
I run both, and I have a users table with a bunch of users.
Time goes by, and we decide that we need a table for user_types.
A migration is created that will create the new table, populate it with the new user types, and update the existing users so that the current user.type is matched to user.type_id.
Devs run the migrations and they have their DB all up to date.
A new dev joins the team. He runs the migrations. And then the seeds.
It breaks.
Now, if we update the seeds to match the latest schema, we will run into duplicated data for the user_types table. To avoid this we would need some sort of defensive code in the migration that checks whether the data is already there, only inserting when it is missing and updating when it exists.
The question is: is this the right way of using migrations? How do you push a data change to all devs without having to re-run the seeds?
I think migrations should not depend on any data in seeds, and seeds should expect the latest migration schema. So whenever your schema changes, you should update the seeds accordingly and do a migrate:fresh --seed. At least, that's what I'm doing.
As you are already doing that, I wonder what makes it a "troublesome process" for you. Can you elaborate?
From your explanation, I think I understand.
The first thing your seeders should do is delete everything, so you always start with an empty database, at least as far as the seeders are concerned.
I'd also take a closer look at the docs (https://laravel.com/docs/5.2/seeding), specifically the section "Using Model Factories":
public function run()
{
    // Create 50 users via the model factory and save one related post for each.
    factory(App\User::class, 50)->create()->each(function ($u) {
        $u->posts()->save(factory(App\Post::class)->make());
    });
}
In this case, you'd do something similar, except you'd start at the top, which for you is user_types; then, for each type it creates, it would generate however many users you need. This seems a bit different from your current process, where it sounds like you have a seeder file for each table.
This way, you have the parent/child relationships handled and made very clear in the code and it's very easy to add additional items in the future. Additionally, because you are essentially starting from scratch each time the seeders run, you can be sure this will work for new and existing devs.
I have an EPiServer CMS. I have a staging instance and a production instance. I want to be able to edit properties/texts in the staging instance, and then in one operation migrate all the new values to production. What is the easiest way to do this?
I suppose I should do something like programmatically enumerate all changed properties since a given timestamp, save the key/values to a file, and then update production from the file. Or is there a better built-in way to achieve the same?
Not built in. If your stage DB is a copy of production when you start, you can export the pages from stage (including page types) and then import them into production, but they will get new IDs and you'd have to delete the originals. You would also lose all updates made to production during development. I think you're better off writing that XML exporter/importer.
Looks like EPiServer Mirroring will solve your problem. You can use mirroring to move content from staging to prod with a scheduled job or by running the job manually.
Just wondering the best way to handle the following....
I want to have a VS2010 database project to keep the schema of my database in the dev, integration test and production environments in sync.
As part of the test and production environments I have a lot of reference data that needs to be loaded into the database.
For dev and test I can just recreate the database and use post-deployment scripts to load the data. However, I can't really do this for the production environment as it will obviously have live data in it.
So what is the best solution for this? I don't think I can use post-deployment scripts to load the data, because in the case of an insert statement I would need to wrap each one inside an IF NOT EXISTS... clause, and there are thousands of rows.
Maybe it's best to use the VS2010 + MSBuild tools to keep the schema up to date and then have a separate solution for managing the data?
Or is there a solution to this that uses purely the tools in VS2010 + MSBuild?
The best solution for a live production environment is not to use automatic updates at all!
Use very well tested, hand-made update scripts, in step with the updates to your backend and frontend applications.
And it is always a good idea to have a fresh backup.
How about truncating and rebuilding the reference data table each time? If there are constraints you can remove them and add them back at the end of the post-deployment script. Would that work for you?
Or is there a reason why you can't remove production reference data?
For reference data you can have a script that handles an insert, update or delete, depending on whether the data is already in the table or not.
Check out this link for more details (this also includes a generator to help you generate your scripts).
Use a populated database to generate merge statements that can be applied in Post-Deployment.
It might be a good idea to take out the DELETE clause though.
We are currently reviewing how we store our database scripts (tables, procs, functions, views, data fixes) in Subversion, and I was wondering if there is any consensus as to the best approach.
Some of the factors we'd need to consider include:
Should we check in 'Create' scripts, or check in incremental changes as 'Alter' scripts
How do we keep track of the state of the database for a given release
It should be easy to build a database from scratch for any given release version
Should a table exist in the database listing the scripts that have run against it, or the version of the database etc.
Obviously it's a pretty open ended question, so I'm keen to hear what people's experience has taught them.
After a few iterations, the approach we took was roughly like this:
One file per table and per stored procedure. Also separate files for other things like setting up database users, populating look-up tables with their data.
The file for a table starts with the CREATE command and a succession of ALTER commands added as the schema evolves. Each of these commands is bracketed in tests for whether the table or column already exists. This means each script can be run in an up-to-date database and won't change anything. It also means that for any old database, the script updates it to the latest schema. And for an empty database the CREATE script creates the table and the ALTER scripts are all skipped.
We also have a program (written in Python) that scans the directory full of scripts and assembles them into one big script. It parses the SQL just enough to deduce dependencies between tables (based on foreign-key references) and order them appropriately. The result is a monster SQL script that gets the database up to spec in one go. The script-assembling program also calculates the MD5 hash of the input files, and uses that to update a version number that is written into a special table in the last script in the list.
Barring accidents, the result is that the database script for a given version of the source code creates the schema this code was designed to interoperate with. It also means that there is a single (somewhat large) SQL script to give to the customer to build new databases or update existing ones. (This was important in this case because there would be many instances of the database, one for each of their customers.)
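A very rough sketch of that script-assembling idea (ignoring the dependency ordering, and with the sql/ directory, the build.sql output and the schema_version table invented here for illustration) could look something like this:

import glob
import hashlib

# Collect the per-object scripts; the real program orders them by parsing
# foreign-key references, here they are simply sorted by file name.
scripts = sorted(glob.glob('sql/*.sql'))

digest = hashlib.md5()
parts = []
for path in scripts:
    with open(path, 'rb') as f:
        content = f.read()
    digest.update(content)
    parts.append(content.decode('utf-8'))

# The last statement stamps the computed version into the database.
parts.append("UPDATE schema_version SET version = '%s';" % digest.hexdigest())

with open('build.sql', 'w') as out:
    out.write('\n\n'.join(parts))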
There is an interesting article at this link:
https://blog.codinghorror.com/get-your-database-under-version-control/
It advocates a baseline 'create' script followed by checking in 'alter' scripts and keeping a version table in the database.
The upgrade script option
Store each change to the database as a separate SQL script. Store each group of changes in a numbered folder. Use a script to apply the changes a folder at a time and record in the database which folders have been applied (a sketch of such a runner follows the pros and cons below).
Pros:
Fully automated, testable upgrade path
Cons:
Hard to see full history of each individual element
Have to build a new database from scratch, going through all the versions
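As mentioned above, a minimal sketch of that kind of runner, assuming numbered folders under changes/ and an applied_folders tracking table (both names invented here), and using SQLite only as a stand-in for whatever database you actually target:

import glob
import os
import sqlite3  # stand-in driver; swap in your real database connection

conn = sqlite3.connect('app.db')
conn.execute('CREATE TABLE IF NOT EXISTS applied_folders (name TEXT PRIMARY KEY)')
applied = {row[0] for row in conn.execute('SELECT name FROM applied_folders')}

# Apply each numbered folder of change scripts exactly once, in order.
for folder in sorted(glob.glob('changes/[0-9]*')):
    name = os.path.basename(folder)
    if name in applied:
        continue
    for script in sorted(glob.glob(os.path.join(folder, '*.sql'))):
        with open(script) as f:
            conn.executescript(f.read())
    conn.execute('INSERT INTO applied_folders (name) VALUES (?)', (name,))
    conn.commit()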
I tend to check in the initial create script. I then have a DbVersion table in my database and my code uses that to upgrade the database on initial connection if necessary. For example, if my database is at version 1 and my code is at version 3, my code will apply the ALTER statements to bring it to version 2, then to version 3. I use a simple fallthrough switch statement for this.
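In Python that fallthrough could be sketched roughly like this (the DbVersion table matches the description above; the table names in the ALTER statements and the version numbers are made up, and a dict of upgrade steps stands in for the switch statement):

# Each entry upgrades the schema from version N to version N + 1.
UPGRADES = {
    1: ["ALTER TABLE orders ADD COLUMN status TEXT"],
    2: ["ALTER TABLE orders ADD COLUMN shipped_at TIMESTAMP"],
}
CODE_VERSION = 3

def upgrade(conn):
    (db_version,) = conn.execute('SELECT version FROM DbVersion').fetchone()
    # Fall through each step until the database matches the code version.
    while db_version < CODE_VERSION:
        for statement in UPGRADES[db_version]:
            conn.execute(statement)
        db_version += 1
        conn.execute('UPDATE DbVersion SET version = ?', (db_version,))
        conn.commit()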
This has the advantage that when you deploy a new version of your application, it will automatically upgrade old databases and you never have to worry about the database being out of sync with the software. It also maintains a very visible change history.
This isn't a good idea for all software, but variations can be applied.
You could get some hints by reading how this is done with Ruby On Rails' migrations.
The best way to understand this is probably to just try it out yourself, and then inspecting the database manually.
Answers to each of your factors:
Store CREATE scripts. If you want to check out version x.y.z then it'd be nice to simply run your create script to set up the database immediately. You could add ALTER scripts as well to go from the previous version to the next (e.g., you commit version 3 which contains a version 3 CREATE script and a version 2 → 3 alter script).
See the Rails migration solution. Basically they keep the table version number in the database, so you always know.
Use CREATE scripts.
Using version numbers would probably be the most generic solution — script names and paths can change over time.
My two cents!
We create a branch in Subversion and all of the database changes for the next release are scripted out and checked in. All scripts are repeatable so you can run them multiple times without error.
We also link the change scripts to issue items or bug ids so we can hold back a change set if needed. We then have an automated build process that looks at the issue items we are releasing and pulls the change scripts from Subversion and creates a single SQL script file with all of the changes sorted appropriately.
This single file is then used to promote the changes to the Test, QA and Production environments. The automated build process also creates database entries documenting the version (branch plus build id). We think this is the best approach for enterprise development. More details on how we do this can be found HERE
The create script option:
Use create scripts that will build the latest version of the database from scratch, empty except for the default lookup data.
Use standard version control techniques to store, branch and tag versions and to view the history of your objects.
When upgrading a live database (where you don't want to lose data), create a blank second copy of the database at the new version and use a comparison tool like Red Gate's to bring the live database up to date.
Pros:
Changes to files are tracked in a standard source-code like manner
Cons:
Reliance on manual use of a 3rd party tool to do actual upgrades (no/little automation)
Our company checks them in simply because someone decided to put it in some SOX document we follow. It makes no sense to me at all, except possibly as a reference document. I can't see a time we'd pull them out and try to use them again, and if we did we'd have to know which one ran first and which one to run after which. Backing up the database is much more important than keeping the alter scripts.
For every release we need to provide one update.sql file which contains all the new table scripts, alter statements, new/modified packages, roles, etc. This file is used to upgrade the database from version 1 to version 2.
Whatever we include in the update.sql file above also needs to go into the individual files for the respective objects: an alter statement that adds a new column has to be folded into the table's create script (the table script has to be modified, rather than appending the alter statement after the create table statement in the file), and the same goes for new tables, roles, etc.
So whenever a user wants to upgrade, he uses the update.sql file.
If he wants to build from scratch, he uses build.sql, which already contains all of the above statements; that keeps the databases in sync.
In my case, I built a shell script for this job: https://github.com/reduardo7/db-version-updater
Since this is an open question: in my case I am trying to create something simple that is easy for developers to use, and I do it with the following scheme.
Things I tested:
File-based script handling in Git using GitLab CI
It does not work: collisions are created, the administration part has to be done by hand in case of disaster, and the development part is too complicated.
Use of permissions and access via MySQL clients
There is no traceability of changes made to the database, and the transition to production is manual.
Use of the programs mentioned here
They require uploading the structures and many adaptations, and you usually end up doing change control by hand anyway.
Repository usage
Could not control the DRP (disaster recovery) part
I could not properly control the backups
I don't think it is a good idea to have the backups on the same server, and it generates high lag for the process
This is what worked best:
Manage permissions per user and generate traceability of everything that is sent to the database
Multi-platform
Use of development, QA and production databases
Always take a backup before each modification
Maintain an open repository for change control
Multi-server
Deactivate/activate access to the web page or app through endpoints
The initial project is at:
https://hub.docker.com/r/arelis/gitdb
There is an interesting article, with a new URL, at: https://blog.codinghorror.com/get-your-database-under-version-control/
It's a bit old but the concepts are still there. Good read!
I'm considering using Django for a project I'm starting (fyi, a browser-based game) and one of the features I'm liking the most is using syncdb to automatically create the database tables based on the Django models I define (a feature that I can't seem to find in any other framework).
I was already thinking this was too good to be true when I saw this in the documentation:
Syncdb will not alter existing tables
syncdb will only create tables for models which have not yet been installed. It will never issue ALTER TABLE statements to match changes made to a model class after installation. Changes to model classes and database schemas often involve some form of ambiguity and, in those cases, Django would have to guess at the correct changes to make. There is a risk that critical data would be lost in the process.
If you have made changes to a model and wish to alter the database tables to match, use the sql command to display the new SQL structure and compare that to your existing table schema to work out the changes.
It seems that altering existing tables will have to be done "by hand".
What I would like to know is the best way to do this. Two solutions come to mind:
As the documentation suggests, make the changes manually in the DB;
Do a backup of the database, wipe it, create the tables again (with syncdb, since now it's creating the tables from scratch) and import the backed-up data (this might take too long if the database is big)
Any ideas?
Manually doing the SQL changes and dump/reload are both options, but you may also want to check out some of the schema-evolution packages for Django. The most mature options are django-evolution and South.
EDIT: And hey, here comes dmigrations.
UPDATE: Since this answer was originally written, django-evolution and dmigrations have both ceased active development and South has become the de-facto standard for schema migration in Django. Parts of South may even be integrated into Django within the next release or two.
UPDATE: A schema-migrations framework based on South (and authored by Andrew Godwin, author of South) is included in Django 1.7+.
As noted in other answers to the same topic, be sure to watch the DjangoCon 2008 Schema Evolution Panel on YouTube.
Also, two new projects on the map: Simplemigrations and Migratory.
One good way to do this is via fixtures, particularly the initial_data fixtures.
A fixture is a collection of files that contain the serialized contents of the database. So it's like having a backup of the database but as it's something Django is aware of it's easier to use and will have additional benefits when you come to do things like unit testing.
You can create a fixture from the data currently in your DB using django-admin.py dumpdata. By default the data is in JSON format, but other options such as XML are available. A good place to store fixtures is a fixtures sub-directory of your application directories.
You can load a fixture using django-admin.py loaddata, but more significantly, if your fixture has a name like initial_data.json it will be automatically loaded when you run syncdb, saving you the trouble of importing it yourself.
Another benefit is that when you run manage.py test to run your Unit Tests the temporary test database will also have the Initial Data Fixture loaded.
Of course, this will work when you're adding attributes to models and columns to the DB. If you drop a column from the database you'll need to update your fixture to remove the data for that column, which might not be straightforward.
This works best when doing lots of little database changes during development. For updating production DBs a manually generated SQL script can often work best.
I've been using django-evolution. Caveats include:
Its automatic suggestions have been uniformly rotten; and
Its fingerprint function returns different values for the same database on different platforms.
That said, I find the custom schema_evolution.py approach handy. To work around the fingerprint problem, I suggest code like:
BEFORE = 'fv1:-436177719'  # first fingerprint
BEFORE64 = 'fv1:-108578349625146375'  # same, but on 64-bit Linux
AFTER = 'fv1:-2132605944'
AFTER64 = 'fv1:-3559032165562222486'

fingerprints = [
    BEFORE, AFTER,
    BEFORE64, AFTER64,
]

CHANGESQL = """
/* put your SQL code to make the changes here */
"""

evolutions = [
    ((BEFORE, AFTER), CHANGESQL),
    ((BEFORE64, AFTER64), CHANGESQL),
]
If I had more fingerprints and changes, I'd re-factor it. Until then, making it cleaner would be stealing development time from something else.
EDIT: Given that I'm manually constructing my changes anyway, I'll try dmigrations next time.
django-command-extensions is a Django library that adds some extra commands to manage.py. One of them is sqldiff, which should give you the SQL needed to update the database to match your new models. It is, however, listed as 'very experimental'.
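For reference, usage is roughly the following (with myapp standing in for one of your own apps):

python manage.py sqldiff myapp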
So far in my company we have used the manual approach. What works best for you depends very much on your development style.
We generally do not have many schema changes in production systems and have somewhat formalized rollouts from development to production servers. Whenever we roll out (10-20 times a year) we do a full diff of the current and the upcoming production branch, reviewing all the code and noting what has to be changed on the production server. The required changes might be additional dependencies, changes to the settings file and changes to the database.
This works very well for us. Having it all automated is a nice vision but too difficult for us - maybe we could manage the migrations, but we would still need to handle additional library, server, and other dependencies.
Django 1.7 (currently in development) is adding native support for schema migration with manage.py migrate and manage.py makemigrations (migrate deprecates syncdb).
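The basic workflow there is roughly the following (with myapp standing in for whichever app changed):

python manage.py makemigrations myapp
python manage.py migrate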