I'm not very familiar with Cake, so here are my questions. We're developing an app on MySQL, but it may eventually need to be deployed to MSSQL or Oracle. How do we make sure that we won't have strange problems with our primary keys? In MySQL they are AUTO_INCREMENT columns, but IIRC in Oracle you need to use sequences... is there a way to make this a transparent change? Am I overthinking it?
Does anyone have experience with switching database vendors on a CakePHP app? Any pointers or things to keep an eye out for?
In the Cake database config file you choose your driver (see http://book.cakephp.org/view/40/Database-Configuration). Then, if you set your PK (which will also be your auto_increment column if using MySQL) to the field name id, Cake will automatically handle the auto-increment insertion. I would presume (NB: I haven't tried Cake with anything else) that Cake will take care of auto-increment columns in something like Oracle.
Cake uses its own DB abstraction layer -- but the included abstractions cover quite a bit, and it will perform as specified (i.e. it'll handle your auto increment stuff for you).
In short, you're probably overthinking it. That said, I would mock up a little Cake app, then try switching databases behind it (change your db config and your app should automagically switch over).
HTH,
Travis
The following practices work great for me:
I use Cake schemas (I tend to set up one schema file for each group of models, e.g. User, Role, and Profile might all be in one UsersSchema file).
Also take a look at using the debuggable.com FixturesShell - it allows you to import test case fixtures into the live database. Great for setting up that initial group of users and roles from the schema file.
Also, if you set your 'id' field to VARCHAR(36) instead of INT(#), Cake will automatically use UUID-style ids. This means you have a far lower chance of id collisions if you need to move the data to another application or server.
The fixtures shell also has a command-line tool for generating UUIDs (so you can add them to your $records variable in the fixture for insertion, etc.).
In summary: use the CakeSchema shell, the fixtures shell from debuggable.com, and UUID values for your ids, and you get a portable structure-creation tool, a portable data-insertion tool, and a portable id field format.
http://github.com/felixge/debuggable-scraps/tree/fd0e5ad625cb21f5ba16e6b186821a5774089ac7/cakephp/shells/fixtures
http://api.cakephp.org/class/schema-shell
You need to be using "cake schema" to manage your DB. This will handle all the DB-specific stuff when you create your database.
http://book.cakephp.org/view/735/Generating-and-using-Schema-files
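For example, with the 1.2/1.3 schema shell (run from your app directory; generate writes app/config/schema/schema.php from the live database, and create builds the tables from that file on whatever datasource is configured):
cake schema generate
cake schema create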
I'm actually looking to map and import an existing database into a Symfony 6 project.
I know we can do this using this command:
php bin/console doctrine:mapping:import "App\Entity" annotation --path=src/Entity
But this database is huge and has a lot of tables, and I don't want them all.
Do you know a way to "select" the tables I want to map? I know that the tables I don't want start with "_" or "inv_". Perhaps there is a way to have a "where" clause?
There is a --filter= option on doctrine:mapping:import, but I think it's not what you're looking for.
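For reference, --filter matches against the entity name the importer generates rather than acting as a SQL-style where clause, so a hypothetical invocation would look like this (Product is a placeholder):
php bin/console doctrine:mapping:import "App\Entity" annotation --path=src/Entity --filter=Product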
If you're migrating your codebase to Symfony and Doctrine, and Doctrine ORM wasn't used before, it would be much easier to just start over and do it all yourself. Yes, it's tedious, especially with a huge database, but you will end up with much "cleaner" entities, and during this phase you can decide which tables to ignore.
But if you still want to try to "import" somehow, consider the following steps:
1. In your local development environment, create a copy of your database - just the schema, without data - so you have all the tables, but they're empty (a mysqldump sketch follows these steps; don't forget to use --no-data).
2. Drop all the other tables that you don't want.
3. Rename some, if you think their names won't fit Doctrine's naming conventions.
4. Switch to that copy database in your .env (change the db name in DATABASE_URL).
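A minimal sketch of creating that schema-only copy, assuming MySQL and placeholder names (mydb, mydb_copy, root):
mysqldump --no-data -u root -p mydb > schema.sql
mysql -u root -p -e "CREATE DATABASE mydb_copy"
mysql -u root -p mydb_copy < schema.sql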
Now try to import again with doctrine:mapping:import. You may need to adjust some tables by repeating steps 2) and 3) and then try the import again.
If the import succeeds and you have a bunch of entities, now comes the boring and tedious part: you have to manually check all the classes in src/Entity.
Depending on your database (MySQL, PostgreSQL, SQLite, etc.), not all column types will be exactly what you want.
Furthermore, most many-to-one/one-to-many relations and all many-to-many junction tables will probably be converted to standalone entities like src\CategoryToProduct.php - which isn't right. So you have to delete them and recreate your relations by hand.
If you happen to have a diagram of your database done with MySQL Workbench, you can use these tools to export your diagram into working entities:
First you set this up with Composer: https://github.com/mysql-workbench-schema-exporter/mysql-workbench-schema-exporter
And then this: https://github.com/mysql-workbench-schema-exporter/doctrine2-exporter
Installing the second package (which pulls in the first) is a single Composer command:
composer require --dev mysql-workbench-schema-exporter/doctrine2-exporter
You have plenty of options to parameterize the desired output, the entity namespace, and so on; everything is described in the second GitHub repo if you want to use the Doctrine 2 ORM export format.
I'm working with Django, and I added a new model field, meaning that I need another column in my SQLite3 database.
I have heard that I'm supposed to use the sqlite> prompt, but I am really confused when I start to use it. So, if that is part of the solution, can you be very specific about what to do?
Thanks.
MORE INFO:
My app is called "livestream" and my class is "Stream".
I added the field "channel".
Running the app returns:
DatabaseError: table livestream_stream has no column named channel
You can use ALTER TABLE to add a new column in SQLite3, but not to rename or drop one. SQLite3 is a very useful database for bootstrapping your app, but sooner or later you will need to change to a more robust/flexible database engine, say MySQL or PostgreSQL.
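For the concrete case in the question, a minimal sketch at the sqlite> prompt (assuming channel is a CharField; adjust the column type to whatever your model field actually maps to):
sqlite> ALTER TABLE livestream_stream ADD COLUMN channel varchar(100);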
Every time you add a new column to your models using SQLite, you will need to recreate the schema (as far as I know, when you do migrations with SQLite to add new columns, South complains; see below). An approach I like more is to use MySQL with django-south from the beginning whenever I'm not sure about every aspect of my database.
Django South is an app for doing database migrations. It's very useful, and the docs are a good starting point for beginners.
Every time you make modifications to your database, you should treat them as migrations and use South (a minimal example follows).
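With South installed, the usual flow for a schema change is two commands (app name taken from the question; syntax per South 0.7+):
python manage.py schemamigration livestream --auto
python manage.py migrate livestream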
Hope this helps!
I'm currently developing a servlet homepage (Spring + Hibernate + MySQL).
At the moment I'm using the Hibernate property hibernate.hbm2ddl.auto set to update.
This is working fine, and Hibernate creates and updates my tables.
However, I've read in multiple places that this is not recommended in production and that it is unsafe.
But if I don't set this option, my tables aren't created, and I really don't want to create my tables manually on the server. I have limited time and am working on this alone.
How is this usually done? It seems like quite a lot of work to add all the tables manually, imo.
In production, you typically have already existing tables with a large amount of data that you don't want to lose, and that you want to migrate to the new schema. Hibernate can't do that automagically for you. It doesn't know that the data that was previously in column A must now be in the new column B.
So you'll need to create a migration script. Of course, you can use Hibernate to generate the new schema for you in development, see what the differences from the old schema are, and create your script from that. But yes, taking an app in production and migrating it requires some work.
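A common convention is to let hbm2ddl modify the schema only in development and have production merely check the mapping against the existing tables (a sketch using the standard property values):
# development: let Hibernate create/alter tables
hibernate.hbm2ddl.auto=update
# production: fail fast if mapping and schema diverge, never touch tables
hibernate.hbm2ddl.auto=validate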
Some days I love my DBAs, and then there is today...
In a Grails app, we use the database-migration plugin (based on Liquibase) to handle migrations etc.
All works lovely.
I have been informed that there is a set of DB administrative metadata that we must support on every table. This information has zero use to the app.
Now, I can easily update my models to accommodate this, but that answer is ugly.
The problem is that at each migration, Liquibase (via the database-migration plugin) complains about the schema and the model being out of sync.
Is there any way to tell Liquibase (or GORM) that columns x, y, and z are to be ignored?
What I am trying to avoid is changesets like this:
changeSet(author: "cwright (generated)", id: "1333733941347-5") {
    dropColumn(columnName: "BUILD_MONTH", tableName: "ASSIGNMENT")
}
Which tries to bring the schema back in line with the model. Being able to annotate those columns as not applying to the model would be a good thing.
Sadly, you're probably better off defining your own mapping block and taking control of the data mapper (which is essentially what Hibernate is) yourself at this point. If you need to take control of how the database-migration plugin handles migrations, you might want to look at the source or raise an issue on its JIRA. Naively, mapping your columns explicitly in the domain model should allow you to bypass unnecessary columns from the DB.
I'm considering using Django for a project I'm starting (fyi, a browser-based game) and one of the features I'm liking the most is using syncdb to automatically create the database tables based on the Django models I define (a feature that I can't seem to find in any other framework).
I was already thinking this was too good to be true when I saw this in the documentation:
Syncdb will not alter existing tables
syncdb will only create tables for models which have not yet been installed. It will never issue ALTER TABLE statements to match changes made to a model class after installation. Changes to model classes and database schemas often involve some form of ambiguity and, in those cases, Django would have to guess at the correct changes to make. There is a risk that critical data would be lost in the process.
If you have made changes to a model and wish to alter the database tables to match, use the sql command to display the new SQL structure and compare that to your existing table schema to work out the changes.
It seems that altering existing tables will have to be done "by hand".
What I would like to know is the best way to do this. Two solutions come to mind:
As the documentation suggests, make the changes manually in the DB;
Do a backup of the database, wipe it, create the tables again (with syncdb, since now it would be creating them from scratch), and import the backed-up data; this might take too long if the database is big (a rough sketch follows this list).
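A rough sketch of option 2 with Django's own tools (the file name is a placeholder, and the wipe step happens in your database shell between the dump and the syncdb):
python manage.py dumpdata > backup.json
python manage.py syncdb
python manage.py loaddata backup.json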
Any ideas?
Manually doing the SQL changes and dump/reload are both options, but you may also want to check out some of the schema-evolution packages for Django. The most mature options are django-evolution and South.
EDIT: And hey, here comes dmigrations.
UPDATE: Since this answer was originally written, django-evolution and dmigrations have both ceased active development and South has become the de-facto standard for schema migration in Django. Parts of South may even be integrated into Django within the next release or two.
UPDATE: A schema-migrations framework based on South (and authored by Andrew Godwin, author of South) is included in Django 1.7+.
As noted in other answers to the same topic, be sure to watch the DjangoCon 2008 Schema Evolution Panel on YouTube.
Also, two new projects on the map: Simplemigrations and Migratory.
One good way to do this is via fixtures, particularly the initial_data fixtures.
A fixture is a collection of files that contain the serialized contents of the database. So it's like having a backup of the database, but as it's something Django is aware of, it's easier to use and has additional benefits when you come to do things like unit testing.
You can create a fixture from the data currently in your DB using django-admin.py dumpdata. By default the data is in JSON format, but other options such as XML are available. A good place to store fixtures is a fixtures sub-directory of your application directories.
You can load a fixture using django-admin.py loaddata, but more significantly, if your fixture has a name like initial_data.json it will be automatically loaded when you do a syncdb, saving you the trouble of importing it yourself.
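For example (myapp is a placeholder; the path follows the fixtures convention described above):
django-admin.py dumpdata myapp > myapp/fixtures/initial_data.json
django-admin.py loaddata initial_data.json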
Another benefit is that when you run manage.py test to run your Unit Tests the temporary test database will also have the Initial Data Fixture loaded.
Of course, this will only work when you're adding attributes to models and columns to the DB. If you drop a column from the database, you'll need to update your fixture to remove the data for that column, which might not be straightforward.
This works best when doing lots of little database changes during development. For updating production DBs a manually generated SQL script can often work best.
I've been using django-evolution. Caveats include:
Its automatic suggestions have been uniformly rotten; and
Its fingerprint function returns different values for the same database on different platforms.
That said, I find the custom schema_evolution.py approach handy. To work around the fingerprint problem, I suggest code like:
BEFORE = 'fv1:-436177719'  # first fingerprint
BEFORE64 = 'fv1:-108578349625146375'  # same, but on 64-bit Linux
AFTER = 'fv1:-2132605944'
AFTER64 = 'fv1:-3559032165562222486'

fingerprints = [
    BEFORE, AFTER,
    BEFORE64, AFTER64,
]

CHANGESQL = """
/* put your SQL code to make the changes here */
"""

evolutions = [
    ((BEFORE, AFTER), CHANGESQL),
    ((BEFORE64, AFTER64), CHANGESQL),
]
If I had more fingerprints and changes, I'd re-factor it. Until then, making it cleaner would be stealing development time from something else.
EDIT: Given that I'm manually constructing my changes anyway, I'll try dmigrations next time.
django-command-extensions is a Django library that adds some extra commands to manage.py. One of them is sqldiff, which should give you the SQL needed to bring the database in line with your new models. It is, however, listed as 'very experimental'.
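Usage is one command per app (myapp is a placeholder; check the project's README for flags, given its experimental status):
python manage.py sqldiff myapp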
So far in my company we have used the manual approach. What works best for you depends very much on your development style.
We generally have not so many schema changes in production systems and somewhat formalized rollouts from development to production servers. Whenever we roll out (10-20 times a year), we do a full diff of the current and the upcoming production branch, reviewing all the code and noting what has to be changed on the production server. The required changes might be additional dependencies, changes to the settings file, and changes to the database.
This works very well for us. Having it all automated is a nice vision, but too difficult for us - maybe we could manage the migrations, but we would still need to handle additional library, server, and other dependencies.
Django 1.7 (currently in development) adds native support for schema migration with manage.py migrate and manage.py makemigrations (migrate deprecates syncdb).
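The built-in flow is then two commands, run after editing your models:
python manage.py makemigrations
python manage.py migrate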