I prefer to migrate my tables manually in Django, because automated tools put me in a place where I cannot see the impact. By impact, I mean the time it takes the DB to get in sync with my models. Below is a simple example:
class User(models.Model):
    first_name = models.CharField(...)
Let's say I want to add this:
class User(models.Model):
    first_name = models.CharField(...)
    last_name = models.CharField(...)
I will follow these steps on my production server:
1. Disable site traffic.
2. Manually connect to your DB server, let's say MySQL, and add a last_name field to the User table (making sure it is in sync with the SQL Django would generate for the new model, of course; see the sketch after this list).
3. Update the model.
4. Upload the new files and re-enable traffic.
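For step 2, the statement would look something like this (a sketch assuming MySQL, a hypothetical app named myapp with Django's default myapp_user table naming, and a guessed max_length; match the column definition to whatever SQL Django actually generates for the real field):

-- Add the column by hand; the VARCHAR length must match the model field,
-- and the default keeps existing rows valid under NOT NULL.
ALTER TABLE myapp_user
    ADD COLUMN last_name VARCHAR(30) NOT NULL DEFAULT '';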
I have two questions for this scenario:
Is this a preferred/acceptable way for manual db migration in Django?
If I just add a field with a specific default value to the User table by SQL manually, but don't update the model, will I still get a DatabaseIntegrity exception?
Thanks in advance,
With schema migration tools such as South, there are ways of explicitly defining how your models get migrated. The benefits of using a tool such as this are:
Your migrations are stored in your version control system
There's a documented procedure to roll back schema migrations
If another developer joins your project, you can refer that person to the South documentation rather than explaining your own hacky approach to documenting schema migrations.
I think I should just emphasize a point here: though South has automigration tools, you don't have to use automigration if you're using South.
Is this a preferred/acceptable way for manual db migration in Django?
I would answer no. As @Mike said, Django has a reliable and fairly versatile ecosystem of migration tools, the most prominent of which is South. @Mike's answer has the details right.
To answer your second question:
If I just add a field with a specific default value to the User table by SQL manually, but don't update the model, will I still get a DatabaseIntegrity exception?
No. Your models will continue to function normally. Of course if you want to do something with the new fields using Django's ORM you'll be better off adding them to the model class.
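To see why at the SQL level, here is a sketch (assuming MySQL, a hypothetical myapp_user table, and a made-up column name): the INSERTs Django generates from the unchanged model simply never mention the new column, and the default fills it in.

-- New column with a default, added behind the ORM's back.
ALTER TABLE myapp_user
    ADD COLUMN signup_channel VARCHAR(20) NOT NULL DEFAULT 'web';

-- Django, knowing nothing of signup_channel, still inserts fine:
INSERT INTO myapp_user (first_name) VALUES ('Ada');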
A side effect of this is that you can migrate legacy database tables by selectively choosing the fields to use in your models.
We are using Entity Framework Code First, and have our models in C#.
One of our DBAs added an additional column; how do we migrate his change into our .NET Core project? Is there a command line to automatically sync this?
We know the command line below will take a whole database and turn it into C# classes. We are only interested in small, incremental changes.
Scaffold-DbContext "Server=(localdb)\mssqllocaldb;Database=Blogging;Trusted_Connection=True;" Microsoft.EntityFrameworkCore.SqlServer -OutputDir Models
That's all you've got. Using an existing database is an all-or-nothing affair: either you manage your entities via code and migrate changes there back to your database, or you scaffold the entities from your database. In other words, a database change means you need to re-scaffold everything. There's no such thing as a "code migration".
Alternatively, I suppose you could just manually modify the entity class. For simple changes like adding a new column, that's probably the best approach, as long as you take care to replicate the property exactly as it should be to match the column in the database.
EDIT (based on comments on question)
Okay, so if you're doing code-first, then no one should be manually touching the database ever. Period. You have to just make that a rule. If the DBA wants to make a change, he can communicate that to your team, where you can make the appropriate code change and hand back a SQL script to perform the migration. This is where DevOps comes into play. Your DBA should be part of your team and you all should be making decisions together. If you can't do that, this isn't going to work.
1. Add the column to the model.
2. Generate a migration; a migration file such as 20180906120000_add_dba_column.cs will be generated.
3. Manually add a new record for the just-created migration to the __EFMigrationsHistory table (see the sketch below), with MigrationId: 20180906120000_add_dba_column and ProductVersion: <copy from previous migration>.
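Step 3 would look something like this (a sketch assuming SQL Server; the ProductVersion value is a placeholder to be copied from the previous row, as noted above):

-- Mark the migration as already applied so EF Core won't try to run it.
INSERT INTO [__EFMigrationsHistory] (MigrationId, ProductVersion)
VALUES ('20180906120000_add_dba_column', '<copy from previous migration>');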
If you want to do it automatically, that means you want to combine two different approaches at the same time.
I think there was a reason why the EF team separated the "EF Code First" and "EF Database First" approaches.
When you choose the EF Code First approach, it means that the source of truth is your application code.
If you want to stick with the code-first approach, you should train your DBA to write EF migrations.
I'm planning to develop a web application in CakePHP that shows information in graphics and cards. I chose CakePHP because the information we need to show is very structured, so the model approach makes it easier to manage the data; I also have some experience with MVC from ASP.NET, and I like how simple the routing is.
So, my problem is that the multiple organizations that could use the app would each have their own database with a different schema than the one we need. I can't just set their connection string in the app.php file, because their database won't match my model.
And an organization's datasource might not fit my model for a lot of reasons: the tables don't have the same names, the schema is different, the fields of my entity are spread across separate tables, and maybe they keep the info in different databases or even in different DBMSs!
I want to know if there's a way to make an interface that achieves this, in such a way that a CakePHP Model/Entity can use the data regardless of the source. Do you have any suggestions of how to do that? Does CakePHP have an option to make this possible? Should I use PHP with some kind of markup language like JSON or XML? Maybe MySQL has a utility to transform data from different sources into a view, and I can make CakePHP use the view instead of the table?
If you have an answer, please be as detailed as you can.
These other options are possible if the interface turns out to be impossible:
- Use another framework that can handle this more easily and has the features I mentioned above.
- Make the organizations change their databases so they match my model (I don't like this one, and they probably won't do it).
- Transfer the data into the application's own database.
Additional information:
The data shown in the graphics is about university students. Every university has its own database, with its own structure and its own applications using the DB; that's why it isn't easy to change the structure. I just want to make it as easy as possible for any school to configure its own DB.
EDIT:
The version is CakePHP 3.2.
An important note is that it doesn't need all the CRUD operations, only "reading". Hope that makes the solution easier.
I don't think your "question" can be answered properly; it doesn't contain enough information or detail. I guess there is something that will stay the same for all organizations, but their data and business logic will differ. But I'll try.
And an organization's datasource might not fit my model for a lot of reasons: the tables don't have the same names, the schema is different, the fields of my entity are spread across separate tables, and maybe they keep the info in different databases or even in different DBMSs!
The model is a whole layer, so if you have completely different table schemas, your business logic, which is part of that layer, will differ as well. Simply changing the database connection alone won't help you then. The data needs to be shown in the views as well, and the views must then be different too.
So what you could try, and what your second image shows, is to implement a layer that contains interfaces and base classes. Then create a CakePHP plugin for each organization that uses these interfaces and base classes, and write some code that conditionally loads the right plugin depending on whatever criterion (domain or subdomain, I guess) is checked. You will have to define the intermediate interfaces in a way that lets you access any organization the same way at the API level.
And one technical thing: you can define the connection of a table object in the model layer. Any entity knows about its origin, but you should not implement business logic inside an entity, nor change the connection through an entity.
EDIT: The version is CakePHP 3.2. An important note is that it doesn't need all the CRUD operations, only "reading". Hope that makes the solution easier.
If that's true, either use the CRUD plugin (yes, you can use only the R part of it) or write some code, like a class that describes the organization and is used to create your table objects and views on the fly.
Overall it's a pretty interesting problem, but IMHO too broad for a simple answer or solution to be given here. I think this would require some discussion and analysis to find the best solution. If you're interested in consulting, you can contact me; check my profile.
I found a way without coding any interface. In fact, it uses features already included in the DBMS and CakePHP.
In the case that the schema doesn't fit the model, you can create views to match the table names and column names the model expects. By definition, views work like tables, so CakePHP looks for the same table name and columns, and the DBMS does the rest of the work.
I made a test with views in MySQL and it worked fine. You can also combine data from different tables.
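For example, here is a sketch of such a view (the legacy table and column names are made up) that exposes legacy data under the conventional name and columns a CakePHP model expects:

-- Hypothetical legacy table: alumnos(alumno_id, nombre, apellido).
-- Expose it under the conventional students(id, first_name, last_name).
CREATE VIEW students AS
SELECT alumno_id AS id,
       nombre    AS first_name,
       apellido  AS last_name
FROM alumnos;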
MySQL views
SQL Server views.
If the user uses another DBMS, you just change the datasource in app.php and create the views if necessary.
If the data is distributed across different DBMSs, CakePHP lets you set a datasource for each table; you just add it to app.php and reference it in the table where required.
Finally, in case you just need the "reading" option, create a user with access limited to the views and only SELECT privileges.
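In MySQL that read-only account might look like this (user, password, and database names are placeholders):

-- Read-only account that can only query the view.
CREATE USER 'cake_reader'@'%' IDENTIFIED BY 'change_me';
GRANT SELECT ON reports_db.students TO 'cake_reader'@'%';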
USING:
CakePHP 3.2
SQL Server 2016
MySQL 5.7
My team currently has several beta customers using our product. The current method of upgrading a customer's database to the latest version consists of re-initializing the database and re-creating the customer's configuration by hand, which isn't a lot of work but is certainly tedious, and it will change as we implement some kind of migration strategy.
My question is: is it possible to use Flyway (or some other tool) to manage database schema migrations across all instances of our product, yet retain independent instance data? What is the best approach to this kind of problem?
Yes, you can use Flyway for this.
You can place the customer-specific reference data in a separate location per customer.
You can then configure flyway.locations like this:
Customer A: flyway.locations=scripts/ddl,scripts/data/customer_a
Customer B: flyway.locations=scripts/ddl,scripts/data/customer_b
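A sketch of what those locations might contain (the file names and contents here are made up; the V<version>__<description>.sql naming is Flyway's convention): the shared DDL scripts evolve the schema for everyone, while each customer directory holds only that customer's reference data.

-- scripts/ddl/V1__create_core_schema.sql (shared by every customer)
CREATE TABLE app_config (
    id    INT PRIMARY KEY,
    name  VARCHAR(100) NOT NULL,
    value VARCHAR(255)
);

-- scripts/data/customer_a/V2__customer_a_config.sql (customer A only)
INSERT INTO app_config (id, name, value)
VALUES (1, 'theme', 'customer_a_default');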
I'm working on a Django app that interacts with an existing database (think ERP/transaction-type data) to perform analysis. There will be minimal or no updating of the existing database, mainly reading data in. It's just a small, simple setup, so there are no replication etc. issues to think about regarding updates.
The analysis would result in new records created within the Django Model.
Currently the existing DB runs on PostgreSQL.
I am aware of Alex Gaynor's GSoC multi-db code which, from what I gather, is ticket #1142 and has no patch on trunk yet.
So from what I gather there are three options I can see:
1) Point the Django DB to the same DB as the ERP and let Django create the tables it needs within it (all the ERP tables have a prefix, so there would be no collision); however, this strikes me as hacky and a recipe for disaster.
2) Create a new DB for Django and automatically copy over the required tables. Better, but I can't update, though I can probably live with that.
3) Try out the multidb patch.
Are there other better ideas out there? I'm leaning towards at least trying out the multidb patch but I'm a little worried about stability and forwards compatibility.
How about not using Django's ORM layer at all for that DB? If the interaction is minimal, you might get it done faster by just using direct SQL with the appropriate PostgreSQL Python library.
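The direct SQL in question might be no more than a read-only aggregation like this (a sketch with made-up ERP table and column names), executed through something like psycopg2 rather than through models:

-- Hypothetical ERP tables; a read-only rollup for the analysis step.
SELECT c.customer_code,
       SUM(t.amount) AS total_spend
FROM erp_transactions t
JOIN erp_customers c ON c.id = t.customer_id
GROUP BY c.customer_code;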
I have an issue where I'm creating a greenfield web application using ASP.NET MVC to replace a lengthy paper form that manually gets (mostly) entered into an existing SQL Server 2005 database. So the front end is the new part, but I'm working against an existing moderately normalized schema. I can easily add new tables, views, etc. to the schema, but modifying tables is going to be near impossible. There's currently at least 2 existing applications (that I'm aware of) that reference this schema and I've stumbled upon at least a dozen "SELECT * FROM..." statements in each. They exist both in code and in views/triggers/stored procs/etc. That's why modifying existing table schemas is a no-go.
All that being said, the form targets different fields in multiple tables in the database. It also has to be dynamic enough to allow the end users to add new questions targeting fields. The end users have a rough idea of the existing database schema, so they're savvy enough to know how to pick out the tables/fields to be targeted.
I have a really rough idea of how I could tackle this, but it seems like complete overkill and would be difficult to write up. I'm hoping somebody has a simpler way of handling this sort of project that I haven't thought of.
If the users know the DB schema, maybe you should go with the Dynamic Data project and just create a web app front end for that DB for them. You would then only build the model they need and an application that displays data from those tables with insert/edit capabilities.
But it's a completely different story if they need additional functionality beyond that.