How to organize a database in Django with multiple, distinct apps?

I'm new to Django (and databases in general), and I'm not sure how to structure the following. The sources of data I'll have for my site are:
- a blog
- for a few different games:
  - a high score list
  - user-created levels
If I were storing the data in ordinary files, I'd just have one file for each of the above. In Django, ideally (I think) I'd have a separate database for each of these, but apparently multiple database support isn't there for Django yet. I'm worried (unnecessarily?) about keeping everything in one database for two reasons:
If I screw something up in one of the sections, I don't want to mess up the rest of the data.
When I'm working on one of these sections, I'd like the freedom to easily change the model around. Since I've learned that syncdb doesn't, in fact, sync the database, I've decided that the easiest thing to do when messing around with a model is to simply wipe the database and start over. Again, I'm worried about messing up the other sections. I looked at South, and it seems like more trouble than it's worth during the planning stages of an app (but I'll reconsider later when there's actually valuable data).
Part of the problem is that I'm not really comfortable keeping my data in a binary format. I'm used to text, so I can easily diff it, modify it in an editor, etc., without going through some magical database interface (I'm using PostgreSQL, by the way).
Are my fears unfounded? How do people normally handle this problem?

For what it's worth, I totally understand your frustration as I went through a really similar thought process when starting. Unfortunately, there isn't much you can do (easily, anyway) besides get familiar with the tools you'll be using.
First of all, you don't need multiple databases for what you're doing - one will do. Each app will create its own tables in the database which are somewhat isolated from one another. As czarchaic mentioned, you can do python manage.py reset app_name to reset just one of them in case you change your model. You will lose that data, though.
To get your data in a relatively easy-to-work-with format, you can use the command python manage.py dumpdata > file_name.json, and then reload it later with python manage.py loaddata file_name.json. You can use this for backups, local test data, or as a poor man's migration (hint: South would be easier). For example:
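To snapshot just one app's data before wiping it (a sketch; "blog" is a hypothetical app label):

    python manage.py dumpdata blog > blog_backup.json
    python manage.py reset blog
    python manage.py loaddata blog_backup.json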
Yet another option is to use a NoSQL database for any data you think will need extra flexibility. Just keep in mind that Django doesn't have support for any of these at the moment. That means no admin support or ModelForms. Of course, having a model may become unnecessary in that case.

In short, your fears are unfounded. You should "organize" your database by project, to use the Django term. Each model in each app will have its own table, but they will all be in the same database. Putting them in separate databases isn't a good idea for a whole host of reasons, the biggest being that you cannot query across databases.
While I agree that South is probably a bit heavy for your initial design/dev stages, it should be considered seriously for anything even resembling a beta version, and it is absolutely necessary in production.
If you're going to be messing with your models a bunch during development, the best thing to do is use fixtures to load data in quickly after running syncdb. Or, if you are going to be changing a bunch of required fields, write some quick Python to create dummy data for you.
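A minimal sketch of the dummy-data idea, runnable from python manage.py shell (the games app and HighScore model are hypothetical; adjust to your own models):

    # Fill the high-score table with throwaway rows for development.
    import random
    from games.models import HighScore  # hypothetical app and model

    for i in range(50):
        HighScore.objects.create(
            player='player%d' % i,
            score=random.randint(0, 100000),
        )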
As for not trusting your data to a binary format, a simple pg_dump will get you a textual version of your data. It sounds to me like you're working on your app against live production data, which is a mistake. Your goal should be to get your application built, working, and tested on fake data, or at least a copy of your production data, and then, when you're sure everything is golden, migrate it into production. This is where tools like South come in handy, as you can automate this deployment, and they will help you handle any database table/column changes you need to make.
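On the pg_dump point: its default output is plain SQL, so (database name hypothetical):

    pg_dump mydb > mydb.sql    # plain-text SQL you can diff, grep, and edit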
I'm sure all of this sounds like a pain, but all of it can be automated, and trust me, it makes your life down the road much, much easier.

I generally just reset the app:
$ python manage.py reset blog
This will reset all tables in the blog app from INSTALLED_APPS.
I'm not sure if this answers your question, but it's much less destructive than wiping the whole DB.

Syncdb should really only be used for development. That's why it doesn't really matter if you wipe the tables and start again, perhaps exporting lookup data into a JSON file that you can import each time you sync.
When your site reaches production, however, you have a little more work. Any changes you make to your models that need to be reflected in the database need to be emitted as SQL and run manually on the database. There's a django-admin.py function to emit the suggested SQL, which you can use to build up a script to run on the database to migrate it. Like you mention, a migrations app such as South can be beneficial here, but it's not strictly needed. For instance:
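In the syncdb-era Django this answer describes, the relevant commands looked roughly like this ("blog" is a hypothetical app name):

    python manage.py sqlall blog     # prints the CREATE TABLE (and initial data) SQL for the app
    python manage.py sqlclear blog   # prints the DROP TABLE statements
    python manage.py sqlcustom blog  # prints any custom SQL for the app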
As far as your separation of sites goes, run them as separate sites/projects. You can have a separate settings file per project, which allows you to run two different databases. This is in contrast to running the two sites as separate apps within the same project. If they're totally separate, they probably shouldn't be in the same project unless you need to leverage common code.
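A minimal sketch of the per-project settings idea, in the single-database settings style of that Django era (project layout and names hypothetical):

    # blog_project/settings.py
    DATABASE_ENGINE = 'postgresql_psycopg2'
    DATABASE_NAME = 'blog_db'

    # games_project/settings.py
    DATABASE_ENGINE = 'postgresql_psycopg2'
    DATABASE_NAME = 'games_db'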

Related

CMS Database design - Master database or Multi-Db per site

I am in the process of designing the CMS that I am about to create. I was thinking about the database and how I want to approach it.
Do you think it's best to create one master database for all my clients' websites? Or should I have one database per site?
What are the benefits and drawbacks of each approach? I am always thinking about the future, so I was also considering adding memcache or APC caching to the project, to offer as an option to my clients.
Just trying to learn the best practices and what other developers' approaches would be.
I've run both. My business chooses to separate client-specific data into separate databases so that if one happens to go corrupt, not all are taken down. In an ideal world this might never happen, but Murphy's law... It does seem very easy to find things with them separated, and you will know with 100% certainty that one client's content will never show up on another's page.
If you do go down that route, be prepared to create scripts that build and configure databases for you. There's nothing fun about building a great system and having demand for it, only to spend your time manually setting up DBs and installs all day long. Also, setting DB names is one additional step that's not part of using a single database; it's a headache that will repeat itself seemingly over and over again.
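A minimal sketch of such a provisioning script, assuming PostgreSQL and the psycopg2 driver (all names hypothetical):

    import psycopg2

    def create_client_db(client_name):
        # CREATE DATABASE cannot run inside a transaction, hence autocommit.
        conn = psycopg2.connect(dbname='postgres', user='admin')
        conn.autocommit = True
        cur = conn.cursor()
        # Validate client_name against a whitelist before interpolating!
        cur.execute('CREATE DATABASE cms_%s' % client_name)
        cur.close()
        conn.close()
        # ...then run the shared schema script against the new database.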
Develop the single master DB. It will take a small amount of additional effort and add a little bit more complexity to the database design, but will give you a few nice features. The biggest is being able to share data between sites.
Designing for a master database means that you have the option to combine sites when it makes sense, but also lets you install a master per site. Best of both worlds.
It depends greatly upon the amount of customization each client will require. If you foresee clients asking for many one-off features specific to their deployment, separate databases based off of a single core structure might make sense. I would highly recommend trying to make any customizations usable by all clients, though, and keeping all structure defined in one place/database instead of duplicating it across multiple databases. By using one database, you make updating the structure straightforward and the implementation consistent across all sites, so they can all use the same CMS code.

Database Refresh

How often do you refresh your development database from the production database? Since there are many types of projects (targeting different domains), I would like to know how it is done and at what intervals (days/months/years).
Thanks
While working at Callaway Golf we had an automated build that would completely refresh the database from a baseline. This baseline would be updated (from production) almost daily. We had a set of scripts (DTS) that would do this for us, so if there was some new and interesting information we could easily do it a couple of times a day, once a week, etc. The key here is automation. If the task is easy to perform, then when it is done really only depends on how it impacts the load on the production database, the network, and the amount of time it takes to complete. It could of course be set up as a scheduled task to run at off-peak hours, before the dev team gets in in the morning.
The key things in refreshing your development database are:
(1) Automate the refresh through a script.
(2) Obfuscate the data in your development database, since you do not want your developers to see the real data; alternatively, do some sampling of your production database. (A sketch of the obfuscation step follows this list.)
(3) Decide the frequency of the refresh; I usually do it once a week.
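A minimal sketch of step (2), run against the freshly restored dev copy (connection details and table/column names hypothetical):

    import psycopg2

    conn = psycopg2.connect(dbname='dev_copy', user='dev')
    cur = conn.cursor()
    # Replace anything personally identifiable with throwaway values.
    cur.execute("UPDATE customers SET email = 'user' || id || '@example.com'")
    cur.execute("UPDATE users SET password = 'dev-placeholder'")
    conn.commit()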
Depends on what kind of work you're doing. If you're debugging issues that are closely related to the data, then frequent updates are good.
If you're doing data quality assurance (which often involves writing code to detect and repair bad data, code that you have to develop and test away from the production server), then you need extremely fresh data. The bad data that is the most valuable to fix is the data that was inserted or updated today.
If you are writing client code, then infrequent updates are good. Often when I'm writing C# UI code, I couldn't care less what the data is; I just care whether it shows up in the right box on the screen.
If you have data with any security issues, you should stop using production data (i.e., never update from production) and get a good sample data generator. Writing a good sample data generator is hard, so third-party products are the way to go. MS Data Dude comes to mind, and I recommend Red Gate's SQL data generator.
And finally, how hard is it to get a copy of the production data? If it is cheap and automatable, just get a new copy every night. If it is expensive (requires the attention of a very busy DBA), resource constraints might answer the question for you regardless of these other concerns.
We tend to refresh every couple of days, or perhaps once a week or so if things are "normal," though if we're investigating something amiss we may do so much more often.
Our production database is about 1GB, so it's not a trivial thing to copy around. Also, for us there's generally no burning need to get current data from production into the dev systems.
The "how" is simply a MySQL "backup" and "restore"
In a lot of cases, refreshing the dev database really isn't that important. Often production systems have far more data than is required for development, and working with such a large dataset can be a hassle for several reasons. One example is development on the interface, where it's more important to have some data than anything specific. In this case, it's more customary to thin the production database out to a smaller subset of real data. After you do this once, it's not really that important to update, as long as all schema changes are pushed through the dev database(s).
On the other hand, performance bugs may often require production-sized databases to be able to reproduce and identify bottlenecks, so in this scenario it is extremely useful to have an almost-realtime database. Many issues may only present themselves with the exact data used in production.
We tend to always go back to an on-demand schedule. We have many different databases that are used in a suite of applications. We stay away from automatically refreshed dev databases because many of our code changes involve database changes, and I don't want anything overwritten.
Not at all; the dev databases (one per developer) get set up by a script or similar a couple of times a day, possibly a couple hundred times when running DB tests locally. This includes a couple of records for playing around with.
Of course we still need a database with at least parts of production in it, for integration and scalability tests. We are aiming for a daily automated refresh. But we aren't there yet.

Using a common database for collaborative development

Some of the people in my project seem to think that using a common development database with everyone connecting to it is the best thing. I think that it isn't, and that each developer having his own database (with periodically updated data dumps) is best. Am I right or wrong? Have you encountered any problems with either of these approaches?
Disk space and CPU should be cheap enough that every developer can run their own instance of the database, with an automated build under version control. This is needed to allow developers to be bold in hacking on the database, in isolation from any other developer's concurrent hacking.
The caveat being, of course, that any changes they make to their private instance are useless to anyone else unless they can be automatically applied during the build process. So there needs to be a firm policy that application code can't depend on any database state unless that state is represented by version-controlled, unit-tested changes to the DDL.
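A minimal sketch of that kind of version-controlled, automated database build (file layout and names hypothetical):

    # Rebuild a private dev database from DDL scripts kept in version control,
    # applied in order: db/001_create_users.sql, db/002_add_orders.sql, ...
    import glob
    import psycopg2

    conn = psycopg2.connect(dbname='dev_db', user='dev')
    cur = conn.cursor()
    for path in sorted(glob.glob('db/*.sql')):
        cur.execute(open(path).read())
    conn.commit()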
For an excellent guide on the theory and practice of treating the database definition as another part of the project code, and coordinating changes and refactorings, see Refactoring Databases: Evolutionary Database Design by Scott W. Ambler and Pramod Sadalage.
I like having my own copy of the database for development, because it gives you the flexibility to rapidly change things without worrying how it will impact others.
However, if all the developers are hacking away on their own copy of the database, it becomes more and more difficult to merge everyone's work together in the end.
I think you can get the best of both worlds by letting developers work on a local copy during day-to-day development, but each developer should probably merge their work into a common copy on a pretty regular basis. Writing a lot of unit tests helps too.
We share a single database amongst all our developers (20-odd), but we've got it structured so that everyone has their own tables.
You don't need a separate database per developer if you structure the application right. It should be configurable which database or table-prefix it uses anyway so you can easily move it between instances (unit test, system test, acceptance test, production, disaster recovery and so on).
The advantage to using a single database is that the cost of maintenance is amortized. You don't have your DBAs trying to handle a lot of databases (or, if you're a small-DB shop, you don't have every developer trying to maintain their own database when they're better utilized in developing).
Having a single point of failure is not a good thing, is it?
I prefer a single, shared database. But it's very dependent on the situation and the applications being developed.
What works for me may not work for you. Go with your gut.
If you are working with Hibernate or any Hibernate-based platform, you can configure your database to be created when you start your server (the create-drop option). This is very useful when you are adding new attributes to your classes. If this is the case, each developer must have his own copy of the DB.
If you are not changing the DB structure at all, then you can use a single shared DB.
In the second case, though, it's not a must. I prefer to have my own DB where I can do whatever I want. On the other hand, remember that some queries can take a lot of time, and those will affect your whole team if you are sharing a DB.

Nonrelational Databases for C++

I was thinking of starting a project that very clearly needs a persistent store. I was about to reluctantly decide on a RDBMS, when I came across an article which briefly mentions CouchDB. Seems some advancements in DB technology have happened since I last looked, so I thought I would ask here about databases before I got into it.
Here are my criteria. (I list the criteria again at the end, so if you want to skip the explanations, just scroll down.)
The project is open source and I will not be asking anything for it, so preferably the database is open source and free. Furthermore the software has to run on both Linux and Windows.
There are parts of the project that have to be in C++. The project is not large enough code wise to justify using a second language. So basically the whole thing will be C++.
This project will not have anything to do with the web, so preferably the database will not require the detritus of a web library.
The objects I want to store fall into one of two categories: a basic object and a container object. The difference being that container objects will contain even more objects, i.e. a parts-of-parts problem. I need a database that can handle such cases cleanly and efficiently.
I also expect the schema to evolve rapidly, at least initially. I also suspect that some of the old data simply will not fit into the new schemas, so I would like to keep different versions of the schema around. When possible, I would like to be able to transform data in one schema into another schema.
For the application to work the way intended, people would have to exchange large chunks of database with each other. So I would want simple ways of importing and exporting data, which I could automate to some degree.
Finally it would be nice if the database could in someway be simulated in unit tests.
Those are my requirements. I have replicated them below to make it easier for people answering.
Thank you
Non-technical requirements:
1. Open source, preferably free.
2. Runs on Windows and Linux.
Technical requirements:
3. Has a C++ interface.
4. Is able to handle a non-web application, preferably without REST.
5. Can handle a "parts of parts" problem fairly well.
6. Can handle multiple indexes.
7. Has some concept of schema versions, can handle multiple schema versions, and can migrate tables from one schema to another.
8. Should have a simple mechanism for moving data from one instance of the database to another.
9. Preferably has some mechanism for testing.
HDF5 is a binary format which behaves like a hierarchical database. It has bindings and libraries for C++ and Python (I only use the latter), and it is used to store big amounts of data, like those produced in certain physics and astronomy experiments.
http://www.hdfgroup.org/HDF5/
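A minimal sketch of that hierarchy via the Python binding h5py (file, group, and attribute names hypothetical):

    import h5py

    # Groups nest like directories, which maps naturally onto a
    # "parts of parts" structure; datasets hold the actual data.
    with h5py.File('parts.h5', 'w') as f:
        engine = f.create_group('assembly/engine')      # nested container
        engine.create_dataset('bolt_sizes', data=[4, 6, 8])
        engine.attrs['material'] = 'steel'              # per-object metadata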
I looked at a few NoSQL databases some time ago (I had a different requirement than you, though: I needed a standalone server). The ones that I remember as particularly interesting are Redis and Kyoto Cabinet. Have a look.
BTW, you don't mention any performance requirements. If there are none to speak of, have you considered SQLite? Simple, embedded, stable, and with the flexibility of SQL after all. With prepared statements, the performance penalty of SQL should not be very high. For instance:
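For what it's worth, a self-referencing table is the classic SQL take on the "parts of parts" requirement; a minimal sketch with Python's built-in sqlite3 module (schema hypothetical):

    import sqlite3

    conn = sqlite3.connect(':memory:')  # use a file path in real use
    conn.execute("""CREATE TABLE part (
        id INTEGER PRIMARY KEY,
        name TEXT,
        parent_id INTEGER REFERENCES part(id)  -- NULL for top-level parts
    )""")
    conn.execute("INSERT INTO part (id, name, parent_id) VALUES (1, 'engine', NULL)")
    conn.execute("INSERT INTO part (name, parent_id) VALUES ('piston', 1)")
    # The ? placeholder gives you a prepared, parameterized statement.
    for row in conn.execute("SELECT name FROM part WHERE parent_id = ?", (1,)):
        print(row[0])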
EDIT: oops, just noticed that you asked this more than a year ago... Well, perhaps you can tell us what you've chosen :)

Version track, automate DB schema changes with django

I'm currently looking at the Python framework Django for future DB-based web apps, as well as for a port of some apps currently written in PHP. One of the nastier issues during my last years was keeping track of database schema changes and deploying those changes to production systems. I haven't dared ask about being able to undo them too, but of course for testing and debugging that would be a great feature. From other questions here (such as this one or this one), I can see that I'm not alone and that this is not a trivial problem. Also, I found many inspirations in the answers there.
Now, as Django seems to be very powerful, does it have any tools to help with the above? Maybe it's even in their docs and I missed it?
There are at least two third party utilities to handle DB schema migrations, South and Django Evolution. I haven't tried either one, but I have heard some good things about South, though Evolution has been around a little longer.
Also, look at SchemaEvolution on the Django wiki. It is just a wiki page about migrating the db.
Last time I checked (version 0.97), syncdb was able to add tables to sync your DB schema with your models.py file, but it cannot:
Rename or add a column on a populated DB. You need to do that by hand.
Refactor your model (like splitting a table into two) and repopulate your DB accordingly.
It might be possible though to write a Django script to make the migration by playing with the two different managers, but that might take ages if your DB is large.
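A minimal sketch of the by-hand route (table and column names hypothetical; the SQL shown is PostgreSQL-flavored):

    # Run from "python manage.py shell": apply a change that syncdb
    # will not make on a populated database.
    from django.db import connection

    cursor = connection.cursor()
    cursor.execute('ALTER TABLE blog_entry RENAME COLUMN body TO content')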
There was a panel session on DB schema changes at the recent DjangoCon; there is a video of the session (thanks to Google), which should provide some useful information on a number of these utilities.
And now there's also dmigrations. From the announcement:
django-evolution attempts to address this problem the clever way, by detecting changes to models that are not yet reflected in the database schema and figuring out what needs to be done to bring the two back in sync. In contrast, dmigrations takes the stupid approach: it requires you to explicitly state the changes in a sequence of migrations, which will be applied in turn to bring a database up to the most recent state that reflects the underlying models.
This means extra work for developers who create migrations, but it also makes the whole process completely transparent—for our projects, we decided to go with the simplest system that could possibly work.
(My bold)
I heard a lot of good things about the Django Schema Evolution branch, and those were the opinions of actual users. It mostly works out of the box and does what it should do.
You should also look at dmigrations; it functions a little differently from django-evolution.
It shows you everything it is doing, and for complicated things it asks for your intervention. It should be great.
