Web2Py tables broken migration - database

I tried adding a new table to one of my models in Web2py. In addition I added a new field to an existing table. I tried loading a page that used those tables and it didn't work; it claimed those tables and fields don't exist. Okay, so I set migrate to False here:
db = DAL('sqlite://storage.sqlite',pool_size=1,check_reserved=['all'], migrate = False)
Reloaded the page, no change. Then I tried doing something like this for the tables it wouldn't recognize:
db.define_table(....,migrate=False,fake_migrate=True)
and I changed the DAL call to be
db = DAL(...,fake_migrate_all=True)
as the web2py manual suggests. Still no change. So then I figured, well okay, I will have to dump the whole database. I took everything out of my databases folder and tried to reload the app with a clean slate.
Now it just doesn't load at all.
According to the database administration interface none of the tables exist, although if I check the databases folder the files are all there. If I try to load the application it immediately reports that none of the tables I reference exist. I have all the code backed up in a repo, but I can't uninstall the current app because I don't have that kind of access on the server this is running on.
Is there anything I can do?
Edit: By the way this is happening on SQLite

Have you already tried, besides dumping the DB, cleaning up the databases folder? If you don't, web2py gets confused, because the .table files there say the tables exist while the DB says they don't. Also, take a look here about fixing broken migrations and some caveats about SQLite.
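For what it's worth, here is a minimal sketch of the two usual recovery paths, written as a web2py model file (DAL and Field come from the web2py environment; the 'thing' table is just a placeholder, not your real model):

# Scenario 1: the databases/ folder has been emptied, so let web2py recreate
# the tables and its *.table migration metadata on the next request.
db = DAL('sqlite://storage.sqlite', pool_size=1, check_reserved=['all'], migrate=True)

# Scenario 2: the tables already exist in SQLite but the *.table files are
# missing or stale; rebuild only the metadata without touching the schema.
# db = DAL('sqlite://storage.sqlite', pool_size=1, check_reserved=['all'], fake_migrate_all=True)

db.define_table('thing', Field('name'))   # placeholder table definition

Once one request has completed without errors, remove fake_migrate_all (or set migrate back to whatever you normally use) so regular migrations resume.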

Related

Tidy up DotNetNuke database tables

I've inherited the maintenance of a DotNetNuke (v6.2.0.1610) site, and one of the things I'd like to do is to tidy up the database tables being used.
It looks like there might have been two installations of DNN into the same database (I'm guessing, I don't know its history and cannot find out), I'm making this assumption because there are two sets of DotNetNuke tables.
For example, we have:
dbo.Portals, dbo.PortalSettings, dbo.Profile, dbo.Roles, etc.
However, then we also have the same set, prefixed with dnn_ -
dbo.dnn_Portals, dbo.dnn_PortalSettings, dbo.dnn_Profile, dbo.dnn_Roles, etc.
I spent a good while tearing my hair out when I could not get our portal to load, until I discovered it was because I was editing the dbo.PortalAlias table when I needed to be editing dbo.dnn_PortalAlias instead.
I wanted to avoid this future maintenance headache, so I backed up the database, and set about deleting all the tables without the dnn_ prefix (web.config specifies objectQualifier="dnn_"). I diligently ensured there was a matching dnn_ table before deleting any.
At first it seemed fine - the portal loaded and all the content was there, I thought I was on to a winner. However when I logged in and accessed the site admin section, that's when I started to get lots of error messages. So I figured I'd deleted too much, I restored the backup, and all is well - portal working again.
However, I really would like to get rid of the unnecessary tables, because no doubt at some point in the future I'll start doing some work on the database, forget about the dnn_ prefix and waste a bunch of time wondering why something isn't working.
So, as a bit of a DotNetNuke newbie, I'm after some help - how can I know what tables are in use, what aren't, and how can I set about tidying up the SQL Server tables? Thanks.
I suggest you delete only the tables which have an equivalent with the "dnn_" prefix.
The DNN database should contain at least the "aspnet_" prefixed tables which are used for the authentication on the portals.
Then, you could have some extensions which could use tables without the "dnn_" prefix. It depends on the sql scripts that those extensions have used during their installation. I hope that those extensions don't run queries on the dnn tables without the "dnn_" prefix. Otherwise it could explain the errors you've encountered.
You could use the SQL Server Profiler to check it.
It turns out there was a view, dnn_Lists which was still referencing dbo.Lists without the dnn_ prefix.
I fixed this view and it's fine now.
(PS: Turns out that it's useful to set IsSuperUser = 1 in the users table for the user you're logged in as, because then you get the full exception details and can fix it.)
Thanks
It would make sense to delete all tables WITHOUT the "dnn_" prefix, but you said you ran into problems.
If you have time and patience and are adamant about tidying things up, I would delete one table at a time and test the admin feature that broke last time until you find the culprit. That is a long shot, but that is how I would approach it.
What might be happening here is that you may have a third-party module installed that ignores the objectQualifier, and when you deleted those tables you broke that module.

VS2010 Database Projects and reference data scripts

Just wondering the best way to handle the following....
I want to have a VS2010 database project to keep the schema of my database in the dev, integration test and production environments in sync.
As part of the test and production environments I have a lot of reference data that needs to be loaded into the database.
For dev and test I can just recreate the database and use Post Deployment scripts to load the data. However, I can't really do this for the production environment as obviously it will have live data in it.
So what is the best solution for this? I don't think I can use Post Deployment scripts to load the database, because in the case of an insert statement I would need to wrap each one inside an IF NOT EXISTS... clause and there are thousands of rows.
Maybe it's best to use the VS2010 + MSBuild tools to keep the schema up to date and then have a separate solution for managing the data?
Or is there a solution to this that uses purely the tools in VS2010 + MSBuild?
The best solution for a live production environment is not to use automatic updates at all.
Use well-tested, hand-made update scripts coordinated with the updates to your backend and frontend applications.
And it is always a good idea to have a fresh backup.
How about truncating and rebuilding the reference data table each time? If there are constraints you can remove them and add them back at the end of the post-deployment script. Would that work for you?
Or is there a reason why you can't remove production reference data?
For reference data you can have a script that handles an insert, update or delete depending on whether the data is already in the table or not.
Check out this link for more details (this also includes a generator to help you generate your scripts).
Use a populated database to generate merge statements that can be applied in Post-Deployment.
It might be a good idea to take out the DELETE clause though.

Wordpress database migration

I've looked around the Wordpress forums about this and didn't find anything so I thought I might try here.
If you have a staging/dev Wordpress setup used for testing new plugins and such, how do you go about migrating the data in the staging database back to the production database? Is there a "Wordpress best practices" way to do this, or am I limited to having to manually migrate tables from one database to the other?
I have a script that mysqldumps a copy of my production Wordpress DB, restores it over my test Wordpress install & then corrects all the "production" settings & urls in the test DB.
Both my production & test databases live on the same server, but you could change the mysqldump settings to dump from a remote mysql server & restore to a local server quite easily.
Here are my scripts:
overwrite_test.coach_db_with_coache_db.sh
#!/bin/bash
dbUser="co*******"
dbPassword="*****"
dbSource="coach_production"
dbDest="coach_test"
tmpDumpFile="/tmp/$dbSource.sql"
mysqldump --add-drop-table --extended-insert --user=$dbUser --password=$dbPassword --routines --result-file=$tmpDumpFile $dbSource
mysql --user=$dbUser --password=$dbPassword $dbDest < $tmpDumpFile
mysql --user=$dbUser --password=$dbPassword $dbDest < /AdminScripts/change_coach_to_test.coach.sql
change_coach_to_test.coach.sql
-- Change all db references from @oldDomain to @newDomain
SET @oldDomain = 'coach.co.za';
SET @newDomain = 'test.coach.co.za';
SET @testUsersPassword = 'password';
UPDATE `wp_1_options` SET `option_value` = REPLACE(`option_value`,@oldDomain,@newDomain) WHERE `option_name` IN ('siteurl','home','fileupload_url');
UPDATE `wp_1_posts` SET `post_content` = REPLACE(`post_content`,@oldDomain,@newDomain);
UPDATE `wp_1_posts` SET `guid` = REPLACE(`guid`,@oldDomain,@newDomain);
UPDATE `wp_blogs` SET `domain` = @newDomain WHERE `domain` = @oldDomain;
UPDATE `wp_users` SET `user_pass` = MD5( @testUsersPassword );
-- Only valid for main wpmu site
UPDATE `wp_site` SET `domain` = @newDomain WHERE `domain` = @oldDomain;
Perhaps you are just looking for the wrong thing. Wouldn't a backup plugin handle this with ease? I know they exist for all the big CMS packages...
The two methods would be using the export/import feature under tools or copying the database. I email myself a copy of my production database weekly using the WordPress Database Backup plugin.
The import feature can be problematic for moving a wordpress blog, as you often have to adjust your php.ini file because the default maximum size of files you can upload on a hosted PHP implementation tends to be too small.
I wanted to pull the database from my production wordpress website into an offline development copy of it on my desktop machine, so I could modify the site and test it with a full set of the existing blog content and history.
This proved to be problematic, as simply making an offline backup of the database and importing it into the local development database did not work.
Overcoming these problems in moving data from the production to the dev database can probably be used to go the other way as well - so I think you can just use these guidelines for what you want to do as well - just start with dev data and move it to prod.
The problems here were:
The permalink designations for the blog posts are all stored in the database as they would be for the online version, but my offline copy isn't at the domain address; instead it is in the localhost directory. So when I launch the site locally, although the CSS formatting and images are all in place (the image links being relative), the actual blog posts don't show up.
Many of the links throughout the site link back out to the internet, so if you try to navigate to archives, or comments, or categories, or the main posts, you get sent back out to the internet instead of staying in the database on the local machine.
To make sure I was doing this right, I blew away the wordpress install I had on my local machine and restarted from scratch.
Once I had a clean, new wordpress install and a brand new, freshly created default local database for it, I opened up the database in phpMyAdmin and took a look at the wp_posts table. Inside there, each record (in other words, each post) has a column titled "guid", which shows the location of that post. For example, the first one in a fresh, default install contains this "guid" value:
http://localhost/wordpress/?p=1
If you look in the wp_posts table of your online version, you'll see instead in this location the url to your site online.
You can't just import the tables wholesale into your local install, because you'll be importing all these outside references. It will make your local version impossible to navigate locally.
So, I created a backup copy of my online site's database and saved it locally as a .sql file. I then opened that file in a text editor (I used notepad++, a great piece of free software, but you could use any text editor). Things I needed to look out for:
For whatever reason, the tables on my online site aren't just, for example, "wp_posts" - they are "wp_something_posts"... there are some extra letters in there in the table names.
Any references to http://... that contain my online url instead of localhost/wordpress
To keep it simple let's just do only the posts. In the backup copy of the .sql you've made of your online database, find the beginning of the wp_posts table. It will look something like this:
--
-- Table structure for table `wp_posts`
--
DROP TABLE IF EXISTS `wp_posts`;
CREATE TABLE `wp_posts` (
...and so on. Highlight everything above that, up to just below the comment marking the beginning of the database at the top of the file (it will say -- Database: 'your database name'), and delete it. Then go to the end of your wp_posts table, and delete everything after the end of it down to the bottom of the file. Now your file only contains your posts, and nothing else.
Save this as a separate document. Call it posts.sql or something like that.
Now, in this posts.sql file, you need to do two find/replace actions.
First, find every instance of the table name wp_something_posts and replace it with wp_posts. You only need to do this if your backup copy of your online database doesn't match your clean local install as far as the table names go. You want whatever the table name is in this file to match what your locally installed wordpress database has as this table name. If you don't make these names match, you are just going to end up importing the posts into a new, differently named table, which will be of no use to you at all.
Second, find every instance of http://... (replace the ellipsis with your url) and replace it with http://localhost/wordpress (or whatever the local url to your dev version of the site is).
Now save this file again, to make sure you've got these changes set.
Now that you've done that, use phpMyAdmin to get into the wordpress database on your local machine, select the "import" tab and navigate the selector to the posts.sql file you just made, and then import it. This will pull all the data in that file into your local wp_posts table.
When that finishes, browse your local wordpress site. You'll see all your posts in there now. Hooray!
You may need to do something similar for a few other tables if you want to bring in your comments, tags, categories, and static pages you've created, etc.
I realize this is a convoluted process. There is probably a tool out there somewhere that makes this activity easier, and if someone knows of one I'd love to find out about it. If someone knows of a better way to do this manually than what I've described, I'd love to know that as well!
Until then, this is the way I figured out how to do it. Hopefully it helps get you going in the right direction.
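The two find/replace passes described above can also be scripted instead of done by hand in an editor. A rough Python sketch (the file names, the "wp_something_" prefix and the URLs are just examples to adapt):

# fix_posts_dump.py -- adjust the example values to your own dump
src = open('posts.sql', encoding='utf-8').read()

# 1. make the table name match the local install
src = src.replace('wp_something_posts', 'wp_posts')

# 2. point absolute links at the local dev url
src = src.replace('http://www.example.com', 'http://localhost/wordpress')

open('posts_local.sql', 'w', encoding='utf-8').write(src)

Then import posts_local.sql through phpMyAdmin exactly as described above.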
You need to handle the serialized objects. Here is a client side HTML5 utility to handle it. Because it is all javascript, it's quite fast.
The alternative would be hooking a bash script into your deployment, so that once the site is deployed, the db is backed up and the serialized values are rewritten with the new domain.
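To make the serialized-object problem concrete: WordPress stores some option values as PHP-serialized strings whose s:<length>: prefixes become wrong after a plain string replace. Here is a rough Python sketch of a length-aware replacement; the regex is naive and will misbehave if a serialized string itself contains '";', so treat it as an illustration rather than a robust tool:

import re

def replace_in_serialized(value, old, new):
    # Replace the domain first, then repair the byte lengths of any
    # PHP-serialized string segments (s:<len>:"...";) the replace broke.
    value = value.replace(old, new)
    def fix_length(match):
        s = match.group(2)
        return 's:%d:"%s";' % (len(s.encode('utf-8')), s)
    return re.sub(r's:(\d+):"(.*?)";', fix_length, value)

print(replace_in_serialized('s:18:"http://coach.co.za";', 'coach.co.za', 'test.coach.co.za'))
# -> s:23:"http://test.coach.co.za";

Dedicated WordPress search/replace tools (like the utility linked above) handle this more robustly.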
This about sums up the problems with the wordpress core architecture... but I wrote a plugin that solves the problems with domain names and absolute urls being stored in the database:
http://wordpress.org/extend/plugins/root-relative-urls/
This will solve the problems outlined by @oddbill. Though don't worry too much about the url being in the GUID column, as that field is never used for link generation.
@markratledge provides a couple of links to some lengthy documents that basically say this:
//export
mysqldump -u[username] -p[password] [database] > backup.sql
//import
mysql -u[username] -p[password] [database] < backup.sql
You'll want to exclude the comments/comments_meta tables if you push to production from staging so you don't lose all of your comments and trackbacks (@DavidLaing's approach will wipe those out.) And this assumes you only make content changes in your staging environment. If you want to make changes in production and your staging environment, you'll need to write scripts that sync the data instead of wholesale overwriting it... good luck on that task, and may I suggest adding created & modified timestamp columns before you invest too much time with the current schema.
And finally, @RussellStuever's approach is suitable in most circumstances, just be sure to know when you are browsing your host-mapped site versus your production site. And really be sure about it, because some browsers cache DNS lookups for days until you physically close them and start a new process. At which point switching hosts may take some time, and switching frequently may get frustrating. And if you need to test with an iPhone, you'll need to publish the site live first, or use a good router that can remap outbound internet requests to local servers, because you cannot modify hosts files on most mobile devices.
My plugin lets you develop and test from http://localhost/ or http://staging.server.local/ or http://www.production.com without any of the usual pitfalls. And then to migrate data, it's as simple as exporting and importing the data, no search & replace step or database setting tweaks necessary.
And don't rely on the import/export tool, it doesn't capture everything in typical wordpress installations, and still requires a needless search & replace step.

Database and version control system

I'm working on a project with the Django framework and using a version control system to synchronize my code with other people. But I don't know how to organize work with the database.
In Django, anyone working on the project may change the Django models and run 'syncdb' to synchronize the model objects with the DB.
But the other people don't know about these changes, and their code revision may not work.
Please tell me some ways to solve this problem (maybe a different DB or something else).
Thanks, and excuse my English :)
You have to actually talk to the people on your project.
If someone changes any database model, they have to actually tell everyone else about the change. This is not a Django problem.
Think of any SQL database -- without Django. When the DBA drops a table, they have to tell everyone that they changed the database. Otherwise all programs that use the table break.
The model definition is special, and whoever can change this must tell everyone else.
You must have an initial backup of the DB under version control. And after that you have to put all the modification scripts into the same version control. Something like this:
/Database (in the repository)
Initial backup
Script1_date.sql
Script2_date.sql
...
I'm not sure I understand your problem, but remember that in Django, syncdb only creates new tables; it doesn't alter an existing table.
If, for example, you just add a new field, syncdb won't do anything.
Actually, looking at the alternatives, I'm often surprised that nobody mentions South
http://south.aeracode.org/
It seems to be the best migration app out there... perhaps I am missing something important, but I find it pretty nice to work with...
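To make that concrete, a South migration is just a numbered Python file that lives in the repository next to the models, so everyone who pulls the code can apply the same schema change with ./manage.py migrate. A hypothetical sketch (the app, table and field names are made up):

# myapp/migrations/0002_add_score.py
from south.db import db
from south.v2 import SchemaMigration
from django.db import models

class Migration(SchemaMigration):

    def forwards(self, orm):
        # apply the schema change
        db.add_column('myapp_player', 'score', models.IntegerField(default=0), keep_default=False)

    def backwards(self, orm):
        # and how to undo it
        db.delete_column('myapp_player', 'score')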
Also take a look at deltasql.
You can test it at http://www.gpu-grid.net/deltasql (username: admin, password: testdbsync)
and download from http://sourceforge.net/projects/deltasql
ciao :-)
I'm curious...what happens if you put your MDF and LDF files under source control? Of course if your tables are empty and you just have the structure of the database...
Sounds like you want migrations.
As an example:
http://www.aswmc.com/dbmigration/
You may also want to add functional unit tests that actually test that the schema is as expected, that way when the tests fail, you can see that it is a schema change, and audit whether it will affect other parts of the app. If it doesn't, fix your test to take into account the new schema.

Altering database tables in Django

I'm considering using Django for a project I'm starting (fyi, a browser-based game) and one of the features I'm liking the most is using syncdb to automatically create the database tables based on the Django models I define (a feature that I can't seem to find in any other framework).
I was already thinking this was too good to be true when I saw this in the documentation:
Syncdb will not alter existing tables
syncdb will only create tables for models which have not yet been installed. It will never issue ALTER TABLE statements to match changes made to a model class after installation. Changes to model classes and database schemas often involve some form of ambiguity and, in those cases, Django would have to guess at the correct changes to make. There is a risk that critical data would be lost in the process.
If you have made changes to a model and wish to alter the database tables to match, use the sql command to display the new SQL structure and compare that to your existing table schema to work out the changes.
It seems that altering existing tables will have to be done "by hand".
What I would like to know is the best way to do this. Two solutions come to mind:
As the documentation suggests, make the changes manually in the DB;
Do a backup of the database, wipe it, create the tables again (with syncdb, since now it's creating the tables from scratch) and import the backed-up data (this might take too long if the database is big)
Any ideas?
Manually doing the SQL changes and dump/reload are both options, but you may also want to check out some of the schema-evolution packages for Django. The most mature options are django-evolution and South.
EDIT: And hey, here comes dmigrations.
UPDATE: Since this answer was originally written, django-evolution and dmigrations have both ceased active development and South has become the de-facto standard for schema migration in Django. Parts of South may even be integrated into Django within the next release or two.
UPDATE: A schema-migrations framework based on South (and authored by Andrew Godwin, author of South) is included in Django 1.7+.
As noted in other answers to the same topic, be sure to watch the DjangoCon 2008 Schema Evolution Panel on YouTube.
Also, two new projects on the map: Simplemigrations and Migratory.
One good way to do this is via fixtures, particularly the initial_data fixtures.
A fixture is a collection of files that contain the serialized contents of the database. So it's like having a backup of the database but as it's something Django is aware of it's easier to use and will have additional benefits when you come to do things like unit testing.
You can create a fixture from the data currently in your DB using django-admin.py dumpdata. By default the data is in JSON format, but other options such as XML are available. A good place to store fixtures is a fixtures sub-directory of your application directories.
You can load a fixture using django-admin.py loaddata but more significantly, if your fixture has a name like initial_data.json it will be automatically loaded when you do a syncdb, saving the trouble of importing it yourself.
Another benefit is that when you run manage.py test to run your Unit Tests the temporary test database will also have the Initial Data Fixture loaded.
Of course, this will work when you're adding attributes to models and columns to the DB. If you drop a column from the database you'll need to update your fixture to remove the data for that column, which might not be straightforward.
This works best when doing lots of little database changes during development. For updating production DBs a manually generated SQL script can often work best.
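As a small aside on scripting the fixture workflow described above, the same commands can be driven from Python (for example in a setup script or a test) rather than the shell. A minimal sketch, assuming the Django settings module is already configured and the fixture path is just an example:

# load_fixture.py -- requires a configured Django settings module
from django.core.management import call_command

# Shell equivalent of the dump step:
#   django-admin.py dumpdata myapp > myapp/fixtures/initial_data.json
# Load the fixture back into the current database:
call_command('loaddata', 'initial_data.json')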
I've been using django-evolution. Caveats include:
Its automatic suggestions have been uniformly rotten; and
Its fingerprint function returns different values for the same database on different platforms.
That said, I find the custom schema_evolution.py approach handy. To work around the fingerprint problem, I suggest code like:
BEFORE = 'fv1:-436177719' # first fingerprint
BEFORE64 = 'fv1:-108578349625146375' # same, but on 64-bit Linux
AFTER = 'fv1:-2132605944'
AFTER64 = 'fv1:-3559032165562222486'
fingerprints = [
BEFORE, AFTER,
BEFORE64, AFTER64,
]
CHANGESQL = """
/* put your SQL code to make the changes here */
"""
evolutions = [
((BEFORE, AFTER), CHANGESQL),
((BEFORE64, AFTER64), CHANGESQL)
]
If I had more fingerprints and changes, I'd re-factor it. Until then, making it cleaner would be stealing development time from something else.
EDIT: Given that I'm manually constructing my changes anyway, I'll try dmigrations next time.
django-command-extensions is a Django library that adds some extra commands to manage.py. One of them is sqldiff, which should give you the SQL needed to update the database to your new model. It is, however, listed as 'very experimental'.
So far in my company we have used the manual approach. What works best for you depends very much on your development style.
We generally have not so many schema changes in production systems and somewhat formalized rollouts from development to production servers. Whenever we roll out (10-20 times a year) we do a full diff of the current and the upcoming production branch, reviewing all the code and noting what has to be changed on the production server. The required changes might be additional dependencies, changes to the settings file and changes to the database.
This works very well for us. Having it all automated is a nice vision but too difficult for us - maybe we could manage migrations, but we would still need to handle additional library, server, and other dependencies.
Django 1.7 (currently in development) is adding native support for schema migration with manage.py migrate and manage.py makemigrations (migrate deprecates syncdb).
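As a rough sketch of what those generated migration files look like (the app, model and field names here are hypothetical), makemigrations writes something like the following, and migrate applies it:

# myapp/migrations/0002_player_score.py
from django.db import migrations, models

class Migration(migrations.Migration):

    dependencies = [
        ('myapp', '0001_initial'),
    ]

    operations = [
        migrations.AddField(
            model_name='player',
            name='score',
            field=models.IntegerField(default=0),
        ),
    ]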
