Sqlite3 Adding Columns - database

I'm working with Django and I added a new model field, which means I need another column in my SQLite3 database.
I have heard that I'm supposed to use the sqlite> prompt, but I am really confused when I start to use it. So, if that is part of the solution, can you be very specific about what to do?
Thanks.
MORE INFO:
my app is called "livestream" and my class is "Stream"
I added the field "channel"
running it returns:
DatabaseError: table livestream_stream has no column named channel

You can use ALTER TABLE to add a new column in SQLite3, but you cannot rename or drop an existing column. SQLite3 is a very useful database for bootstrapping your app, but sooner or later you will need to change to a more robust/flexible database engine, say MySQL or PostgreSQL.
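For the error above, that would look like this at the sqlite> prompt you mentioned (a sketch: the database filename and the varchar(200) type are assumptions, so match the type to whatever manage.py sql livestream prints for the channel field):

sqlite3 your_database.db   # filename is an assumption: use the file your settings point at
sqlite> ALTER TABLE livestream_stream ADD COLUMN channel varchar(200);  -- column type assumed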
Every time you add a new column to your models using SQLite, you will need to recreate the schema (as far as I know, South complains when you use migrations with SQLite to add new columns; see below). An approach I like better is to use MySQL with django-south from the beginning whenever I'm not sure about every aspect of my database.
Django South is an app for doing database migrations. It's very useful and the docs are a good starting point for beginners.
Whenever you need to make modifications to your database, you should consider them migrations and use South.
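With South installed, the day-to-day workflow for a change like the one above is two commands (a sketch using the app name from the question):

python manage.py schemamigration livestream --auto   # generate the migration
python manage.py migrate livestream                  # apply it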
Hope this helps!

Related

Is there any way to create migrations in beego?

I haven't found anything in the documentation except the "syncdb" command, which creates database tables from scratch. Is there any command to create and run migrations based on the ORM model, like in Django? Add a field, change a type, etc.
No. orm.RunSyncdb(name, force, verbose) and its command-line equivalent only do a small subset of what tools like Django's South can do.
Beego's orm can:
Create new tables from scratch
Drop all tables (force = true)
Add new columns as you extend your model
You need to handle dropping columns, and any changes to the column parameters used to initially create the table, yourself.
Sadly, beego doesn't include this feature, but no Go framework (as of today) does.
Instead they all delegate that to other libraries.
What you can do, however, is use goose for migrations:
https://bitbucket.org/liamstask/goose
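A sketch of typical goose usage (the migration name is hypothetical; check the goose README for the exact flags):

goose create AddChannelColumn sql   # scaffold a timestamped SQL migration with up/down sections
goose up                            # apply all pending migrations
goose down                          # roll back the most recent migration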
or any other migration library as discussed in the following thread:
http://www.reddit.com/r/golang/comments/2dlbz5/database_migration_handling_in_go/
Remember that due to the modularity of beego you can also use any other ORM (like gorm).
If you want a list of tools/libraries in the Go ecosystem, search for avelino/awesome-go on Google.
Yes, you can create migrations in beego now. For example, if you need to create a new table, you can start by creating a new migration file using the bee tool:
bee generate migration create_user_table
This command will create a file inside database/migrations folder. The file name contains the date, time and name of the migration.
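After filling in the Up and Down methods in the generated file, you apply it with bee migrate (the driver and connection string below are placeholders for your own database):

bee migrate -driver=mysql -conn="root:@tcp(127.0.0.1:3306)/mydb"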
For further details, you can check this article: https://ncona.com/2017/10/database-migrations-in-beego

Create tables in Hibernate automatically or manually?

I'm currently developing a servlet homepage (Spring + Hibernate + MySQL).
At the moment I'm using the Hibernate property hibernate.hbm2ddl.auto set to update.
This is working fine and Hibernate creates and updates my tables.
However, I've read in multiple places that this is not recommended in production and that it is unsafe.
But if I don't set this option my tables are not created, and I really don't want to create my tables manually on the server. I have limited time and I'm working on this alone.
How is this usually done? It seems like quite a lot of work to add all the tables manually, imo.
In production, you typically have already existing tables with a large amount of data that you don't want to lose, and that you want to migrate to the new schema. Hibernate can't do that automagically for you. It doesn't know that the data that was previously in column A must now be in the new column B.
So you'll need to create a migration script. Of course, you can use Hibernate to generate the new schema for you in development, see what the differences with the old schema are, and create your script from that. But yes, taking an app that's in production and migrating it takes some work.
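A common compromise, using the standard values of the property from the question, is to let Hibernate evolve the schema in development but only verify it in production:

# development: create/alter tables to match the mappings
hibernate.hbm2ddl.auto=update
# production: fail fast if the schema doesn't match the mappings, but never change it
hibernate.hbm2ddl.auto=validate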

Move Django model from one app to another [duplicate]

This question already has answers here:
How do I migrate a model out of one django app and into a new one?
I made the stupid mistake of creating too many models in the same Django app, now I want to split it into 3 distinct ones. Problem is: there's already data in production in two customers' sites, so I need to carefully plan any schema/data migration to be done (I'm using django-south). I'm unsure on how to proceed, any advice would be greatly appreciated.
(I'm using PostgreSQL on an Ubuntu Server 12.04 LTS, if that's of any relevance)
I thought about using db.rename_table, but can't figure out how to correctly update the foreign keys to those models (old to new) - irrelevant at the database level (since the table renaming already got that covered), but not so at the ORM level.
Update: after thinking about it, and after asking this question on programmers.SE, I decided to keep things simple and not worry about migrations between major versions of the product. Short term, I'll just use db.rename_table to match the new name, while also using db_table as Daniel Roseman suggested, all the while keeping the models in the old app. When upgrading to a major version, I'll switch to the new app and ditch all migrations altogether (so fresh installs of the new version will create the database "as-is" instead of going through all the historical migrations).
I don't see why you need any data migration at all.
Just move the models to the new app, and add a db_table setting in the inner Meta classes to point to the old table names.
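For example (a sketch with hypothetical app and model names):

# newapp/models.py
from django.db import models

class Entry(models.Model):
    title = models.CharField(max_length=100)

    class Meta:
        # keep using the table the old app originally created
        db_table = "oldapp_entry"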
I did something similar on a smaller scale recently and this was my process:
Create new app and corresponding models
Update views to use new models
Update unit/systems tests to make sure nothing broke (important!)
Write a management command that populates the new models based on the old models (see the sketch after this list)
Deploy code
Run migration for new models
Run management command script to update new models
Leave the old app around for 1-2 weeks and, when you think it's all good, drop it.
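A minimal sketch of step 4's management command (app, model, and field names are hypothetical):

# newapp/management/commands/copy_old_data.py
from django.core.management.base import BaseCommand

from oldapp.models import OldEntry
from newapp.models import Entry

class Command(BaseCommand):
    help = "Populate newapp.Entry from oldapp.OldEntry"

    def handle(self, *args, **options):
        for old in OldEntry.objects.all():
            # get_or_create keeps the command idempotent if it's re-run
            Entry.objects.get_or_create(
                pk=old.pk,
                defaults={"title": old.title, "body": old.body},
            )
        self.stdout.write("Copied %d rows" % OldEntry.objects.count())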
The reasons why I didn't use a data migration:
Not familiar -- I felt the task was too important to use a process I wasn't familiar with
More comfortable moving data with Python code than with South magic
Ran into South migration issues with dependencies, and didn't want to further complicate the migrations with a data migration. This may well be an ill-founded assumption due to my unfamiliarity with the mechanics of a data migration
Perhaps as a bias from point 3, I convinced myself that using South purely as a schema management tool is the 'right' way to go. Creation/updating of data should be done in the Django layer using either fixtures or custom management commands
The simplest solution I could think of:
Create a SchemaMigration changing the type of every foreign key to models in the old app to a primitive type (including ones internal to it);
Create the new apps and their models normally;
Do a data migration from the old tables to the new ones;
Create another SchemaMigration, changing every primitive type to a foreign key again, now pointing to the new tables;
Remove the old app from settings and drop its tables.
Laborious, yes, but would do the trick. I'd hope for a better solution though.
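If you do want South to handle step 3, it can scaffold the data migration for you (app and migration names hypothetical):

python manage.py datamigration newapp copy_from_old_app

You then fill in the generated forwards() function with the ORM calls that copy rows from the old tables into the new ones.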

How do you make your model's database portable in CakePHP?

I'm not very familiar with Cake, so here are my questions: we're developing an app on MySQL, but it may eventually need to be deployed to MSSQL or Oracle. How do we make sure that we won't have strange problems with our primary keys? In MySQL they are AUTO_INCREMENT columns, but IIRC in Oracle you need to use sequences... Is there a way to make this a transparent change? Am I overthinking it?
Does anyone have experience with switching database vendors on a CakePHP app? Any pointers or things to keep an eye out for?
In the Cake database config file you choose your driver (see http://book.cakephp.org/view/40/Database-Configuration). Then, if you set up your PK (which will also be your auto-increment column if using MySQL) with the field name id, Cake will automatically handle the auto-increment insertion. I would presume (NB: I haven't tried Cake with anything else) that Cake will take care of auto-increment columns in something like Oracle.
Cake uses its own DB abstraction layer -- but the included abstractions cover quite a bit, and it will perform as specified (i.e. it'll handle your auto increment stuff for you).
In short, you're probably overthinking. That said, I would mock up a little cake app, then try switching databases behind it (change your db config and your app should automagically switch over).
HTH,
Travis
The following practices work great for me
I use Cake schemas (I tend to set up one schema file for each group of models, e.g. User, Role, and Profile might all be in one UsersSchema file)
Also take a look at using the debuggable.com FixturesShell - it allows you to import test case fixtures into the live database. Great for setting up that initial group of users and roles from the schema file.
Also, if you set your 'id' field to VARCHAR(36) instead of INT, Cake will automatically use UUID-style ids. This means you have a far, far lower chance of id value collisions if you need to move the data to another application or server.
The fixtures shell also has a command line tool for generating uuids ( so you can add them to your $records variable in the fixture for insertion etc. )
In summary: use the CakeSchema schema shell, the fixtures shell from debuggable.com, and UUID values for your ids, and you get a portable structure-creation tool, a portable data-insertion tool, and a portable id field format.
http://github.com/felixge/debuggable-scraps/tree/fd0e5ad625cb21f5ba16e6b186821a5774089ac7/cakephp/shells/fixtures
http://api.cakephp.org/class/schema-shell
You need to be using "cake schema" to manage your DB. It will handle all the DB-specific stuff when you create your database.
http://book.cakephp.org/view/735/Generating-and-using-Schema-files
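Typical usage looks like this (a sketch; the exact subcommands vary slightly between CakePHP versions, so check the schema shell docs linked above):

cake schema generate   # snapshot the current database into a schema file
cake schema create     # build the tables from that schema file on a fresh database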

Altering database tables in Django

I'm considering using Django for a project I'm starting (FYI, a browser-based game), and one of the features I like most is using syncdb to automatically create the database tables based on the Django models I define (a feature I can't seem to find in any other framework).
I was already thinking this was too good to be true when I saw this in the documentation:
Syncdb will not alter existing tables
syncdb will only create tables for models which have not yet been installed. It will never issue ALTER TABLE statements to match changes made to a model class after installation. Changes to model classes and database schemas often involve some form of ambiguity and, in those cases, Django would have to guess at the correct changes to make. There is a risk that critical data would be lost in the process.
If you have made changes to a model and wish to alter the database tables to match, use the sql command to display the new SQL structure and compare that to your existing table schema to work out the changes.
It seems that altering existing tables will have to be done "by hand".
What I would like to know is the best way to do this. Two solutions come to mind:
As the documentation suggests, make the changes manually in the DB;
Do a backup of the database, wipe it, create the tables again (with syncdb, since now it's creating the tables from scratch) and import the backed-up data (this might take too long if the database is big)
Any ideas?
Manually doing the SQL changes and dump/reload are both options, but you may also want to check out some of the schema-evolution packages for Django. The most mature options are django-evolution and South.
EDIT: And hey, here comes dmigrations.
UPDATE: Since this answer was originally written, django-evolution and dmigrations have both ceased active development and South has become the de-facto standard for schema migration in Django. Parts of South may even be integrated into Django within the next release or two.
UPDATE: A schema-migrations framework based on South (and authored by Andrew Godwin, author of South) is included in Django 1.7+.
As noted in other answers to the same topic, be sure to watch the DjangoCon 2008 Schema Evolution Panel on YouTube.
Also, two new projects on the map: Simplemigrations and Migratory.
One good way to do this is via fixtures, particularly the initial_data fixtures.
A fixture is a collection of files that contain the serialized contents of the database. So it's like having a backup of the database but as it's something Django is aware of it's easier to use and will have additional benefits when you come to do things like unit testing.
You can create a fixture from the data currently in your DB using django-admin.py dumpdata. By default the data is in JSON format, but other options such as XML are available. A good place to store fixtures is a fixtures sub-directory of your application directories.
You can load a fixture using django-admin.py loaddata but, more significantly, if your fixture has a name like initial_data.json it will be automatically loaded when you do a syncdb, saving the trouble of importing it yourself.
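For example (app name hypothetical):

django-admin.py dumpdata myapp > myapp/fixtures/initial_data.json   # dump current data
django-admin.py loaddata initial_data.json                          # reload it explicitly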
Another benefit is that when you run manage.py test to run your Unit Tests the temporary test database will also have the Initial Data Fixture loaded.
Of course, this will work when you're adding attributes to models and columns to the DB. If you drop a column from the database you'll need to update your fixture to remove the data for that column, which might not be straightforward.
This works best when doing lots of little database changes during development. For updating production DBs a manually generated SQL script can often work best.
I've been using django-evolution. Caveats include:
Its automatic suggestions have been uniformly rotten; and
Its fingerprint function returns different values for the same database on different platforms.
That said, I find the custom schema_evolution.py approach handy. To work around the fingerprint problem, I suggest code like:
BEFORE = 'fv1:-436177719'             # first fingerprint
BEFORE64 = 'fv1:-108578349625146375'  # same, but on 64-bit Linux
AFTER = 'fv1:-2132605944'
AFTER64 = 'fv1:-3559032165562222486'

fingerprints = [
    BEFORE, AFTER,
    BEFORE64, AFTER64,
]

CHANGESQL = """
/* put your SQL code to make the changes here */
"""

evolutions = [
    ((BEFORE, AFTER), CHANGESQL),
    ((BEFORE64, AFTER64), CHANGESQL),
]
If I had more fingerprints and changes, I'd re-factor it. Until then, making it cleaner would be stealing development time from something else.
EDIT: Given that I'm manually constructing my changes anyway, I'll try dmigrations next time.
django-command-extensions is a django library that gives some extra commands to manage.py. One of them is sqldiff, which should give you the sql needed to update to your new model. It is, however, listed as 'very experimental'.
So far in my company we have used the manual approach. What works best for you depends very much on your development style.
We generally don't have many schema changes in production systems and have somewhat formalized rollouts from development to production servers. Whenever we roll out (10-20 times a year) we do a full diff of the current and the upcoming production branch, reviewing all the code and noting what has to be changed on the production server. The required changes might be additional dependencies, changes to the settings file, and changes to the database.
This works very well for us. Having it all automated is a nice vision but too difficult for us - maybe we could manage the migrations, but we would still need to handle additional library, server, and other dependencies.
Django 1.7 (currently in development) is adding native support for schema migration with manage.py migrate and manage.py makemigrations (migrate deprecates syncdb).
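The new built-in workflow mirrors South's (app name hypothetical):

python manage.py makemigrations myapp   # generate migration files from model changes
python manage.py migrate                # apply them to the database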
