Makemigrations and auto_now_add - django-models

Django 1.9.7
PostgreSQL
Could you help me understand what I should do?
The options auto_now_add and default are mutually exclusive.
This is just a bit confusing. I'm in favour of pressing 1, but decided to ask for some advice first.
Model
creation_date = models.DateField(auto_now_add=True)
Traceback
michael@michael:~/workspace/photoarchive$ python manage.py makemigrations
You are trying to add a non-nullable field 'creation_date' to place without a default; we can't do that (the database needs something to populate existing rows).
Please select a fix:
1) Provide a one-off default now (will be set on all existing rows)
2) Quit, and let me add a default in models.py
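For reference, picking option 1 and entering timezone.now at the prompt produces a migration roughly like the sketch below (the app label and dependency name here are made up; the model name comes from the traceback). The key detail is preserve_default=False: the default is only used once to backfill existing rows, so it doesn't clash with auto_now_add afterwards.
# Sketch of the auto-generated migration; app label and dependency are hypothetical.
import django.utils.timezone
from django.db import migrations, models

class Migration(migrations.Migration):

    dependencies = [('places', '0001_initial')]

    operations = [
        migrations.AddField(
            model_name='place',
            name='creation_date',
            field=models.DateField(auto_now_add=True, default=django.utils.timezone.now),
            preserve_default=False,
        ),
    ]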

Related

Oracle Sample HR schema :: specify default tablespeace for HR as parameter 2

I have downloaded Oracle Database Sample Schemas from GitHub and I'm following the official guide about how to import the HR schema manually.
Oracle SQL Developer prompts me with this banner: Enter value for 2:
Which is caused by the lines:
PROMPT specify default tablespeace for HR as parameter 2:
DEFINE tbs = &2
Now.
Since this is an official guide for newbies, I'd expect at least an explanation of the value I need to enter.
Especially because Oracle is providing the data, so they must know whether the default value should be 2, 200, 2000, etc.
What should I enter there?
A number or a string?
Are these Kb or Mb?
You need to supply the name of a TABLESPACE; this is specified in the instructions in the README.md file in the Github repository:
Use your current SYSTEM and SYS passwords, and also your actual default and temporary tablespace names. The passwords for the new HR, OE, PM, IX, SH and BI users will be set to the values you specify.
If you have not got an existing TABLESPACE that you want to use then you need to create one using the CREATE TABLESPACE command.
For example:
CREATE TABLESPACE tbs_01 DATAFILE 'tbs_f2.dat' SIZE 40M ONLINE;
Then, if you didn't have an existing tablespace you wanted to use and had used the above command, the value you would need to provide is: tbs_01
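If you are not sure which tablespaces already exist, you can list them before running the install script. A minimal sketch using Python's cx_Oracle driver (the connection details are illustrative; the same query can simply be run in SQL Developer instead):
# Hypothetical credentials/DSN; querying DBA_TABLESPACES needs a privileged account.
import cx_Oracle

conn = cx_Oracle.connect("system", "password", "localhost/orclpdb")
cur = conn.cursor()
cur.execute("SELECT tablespace_name FROM dba_tablespaces")
for (name,) in cur:
    print(name)
conn.close()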

progress - syntax to modify FIELD

I added a field to a Progress database with
ADD FIELD fieldName ON TableName...
and now I want to change/modify this field (PRECISION or FORMAT or something else...).
What syntax is correct? I tried:
UPDATE FIELD
MODIFY FIELD
ALTER FIELD
I also tried SQL notation: ALTER TABLE
but nothing works.
Could you please help me with the syntax to modify a field?
If you are using the 4GL engine (you are using _progres or prowin32 to start a session) then you want to use the "data dictionary" tool to create DDL. You run "dict.p" to access that tool, e.g.: _progres dbName -p dict.p
This will allow you to create tables, define fields and indexes etc. If you want to export the definitions you use the "admin" sub-menu to dump a ".df" file. You can manually edit the output but you need to know what you are doing. It is mostly obvious but it is not documented or supported.
Do NOT imagine that using SQL from within a 4GL session will work. It will not. The 4GL engine internally supports a very limited subset of sql-89. It is mostly there as a marketing ploy. There is nothing but pain and agony down that road. Don't go there. If you are using _progres or prowin32 you are using the 4gl engine.
If you are using SQL92 externally (via sqlexp or some other 3rd party SQL tool that uses an ODBC or JDBC connection) then normal SQL stuff should work but you might want to spend some quality time with the documentation to understand the areas where OpenEdge differs from Oracle or Microsoft or whatever sql dialect you are used to.
Tom, thanks for your answer.
I use OpenEdge Release 10.1A02 on Linux.
I can make a dump .df file and I can also add a new table from a file (a similar .df).
But why can't I modify any added fields? Of course I can use the "p" editor and do it manually from the menu Tools/Data Editor/Schema and add a new table, but it's risky to tell database administrators to do it manually on each environment (especially production).
If the syntax
ADD FIELD fieldName ON TableName...
exists, why is there no
MODIFY FIELD fieldName ON TableName... ?
Bartek.
Just in case - here are some working examples of .df files in OE 11.3 (they may be valid in other versions too):
Rename column:
RENAME FIELD "OldName" OF "TableName" TO "NewName"
Other properties:
UPDATE FIELD "FieldName" OF "TableName"
FORMAT "Yes/No"
LABEL "Label"
VALMSG "Validation message..."
Of course the database must be shut down first (apply those changes in single-user mode).

Insert data from another DB in tables

I'm having some issue here. Let me explain.
So I was about done with the migration of this project and decided to run the test suite to make sure the logic was still working as expected. Unfortunately, it didn't... but that's not the issue.
At the end of the suite, there was a nice script that executes a delete on the data in 5 tables of our development database. That would be fine if there were also a script to actually populate the database...
The good side is that we still have plenty of data in the production environment, so I'm looking for a way and/or possibly a tool to extract the data from these 5 particular tables in production and insert it into the dev environment. There are all sorts of primary and foreign keys between these tables, maybe auto-increment fields (and also A LOT of data), which is why I don't want to do it manually.
Our database is DB2 v9, if it makes any difference. I'm also working with SQuirreL; there might be a plugin, but I haven't found one yet.
Thanks
This is sort of a shot in the dark, as I've never used DB2, but from previous experience my intuition immediately says "try CSV". I'm willing to bet my grandmother you can import/export CSV files in your software (why did I just start thinking of George from Seinfeld?).
This should also leave you with FKs and IDs intact. You might have to reset your auto-increment value to whatever is appropriate, if need be. That, of course, would be done after the import.
In addition, CSV files are plain text and very easily manipulated should any quirks show their head.
Best of luck to you!
Building on Arve's answer, DB2 has a built-in command for importing CSV files:
IMPORT FROM 'my_csv_file.csv'
OF del
INSERT INTO my_table
You can specify a list of columns if they are not in the default order:
IMPORT FROM 'my_csv_file.csv'
OF del
-- 1st, 2nd, 3rd column in CSV
METHOD P(1, 2, 3)
INSERT INTO my_table
(foo_col, bar_col, baz_col)
And you can also specify a different delimiter if it's not comma-delimited. For example, the following specifies a file delimited by |:
IMPORT FROM 'my_csv_file.csv'
OF del
MODIFIED BY COLDEL|
-- 1st, 2nd, 3rd column in CSV
METHOD P(1, 2, 3)
INSERT INTO my_table
(foo_col, bar_col, baz_col)
There are a lot more options. The official documentation is a bit hairy:
DB2 Info Center | IMPORT command
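For the extraction half (production to CSV), any DB-API connection will do. Here is a minimal sketch assuming IBM's ibm_db_dbi driver (the connection string, table, and column names are illustrative); the output is comma-delimited, so the IMPORT command above can load it as-is:
# Sketch: dump one production table to a CSV file for the dev import.
import csv
import ibm_db_dbi  # IBM's DB-API 2.0 driver; any DB-API driver works the same way

conn = ibm_db_dbi.connect(
    "DATABASE=proddb;HOSTNAME=prod-host;PORT=50000;PROTOCOL=TCPIP;"
    "UID=user;PWD=secret;"
)
cur = conn.cursor()
cur.execute("SELECT foo_col, bar_col, baz_col FROM my_table")

with open("my_csv_file.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for row in cur:
        writer.writerow(row)

conn.close()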
Do you have access to the emulator? There's a function in the emulator that allows you to import CSV into tables directly.
Frank.
Personally, I am not aware of any automated tools that can "capture" a smaller subset of your production data into a test suite, but in my day, I was able to use QMF and some generic queries to do just that. It does require forward planning / analysis of your table structures, parent-child dependencies, referential integrity and other things.
It did take some initial work to do, but once it was done, I was able to use, and re-use these tools to extract several different views of production data for my testing purposes.
If this appeals to you, read on.
On a high-level view, you could do this:
Determine what the key column names are.
Create a "keys" table for them.
Write several queries to look for your test conditions and populate the keys_table.
Once you are satisfied that keys_table has a satisfactory subset of keys, then you can use your created tools to strip out the data for you.
Write a generic query that joins the keys_table with your production tables and exports the data into flat files.
Write a proc to do all the extractions / populations for you automatically.
If you have access to QMF (and you probably do in a DB2 shop), you may be able to do something like this:
Determine all of the tables that you need.
Determine the primary indexes for those tables.
Determine any referential integrity requirements for those tables.
Determine Parent - Child relationships between all the tables.
For the lowest-level child table (typically the one with the most indexes), note all the columns used to identify a unique key.
With the above information, you can create a generic query to strip out a smaller subsection of production data, for #5. In other words, you can create a series of specific queries and populate a small Key table that you create.
In QMF, you can create a generic query like this:
select t.*
from &t_tbl t
, &k_tbl k
where &cond
order by 1, 2, 3
In the proc, you simply pass the table name, keys, and conditions variables. Once the data is captured, you EXPORT the data into some filename.
You can create an EXPORT_TABLE proc that would look something like this:
run query1 (&&t_tbl = students_table , &&k_tbl = my_test_keys ,
+ &&cond = (t.stud_id = k.stud_id and t.course_id = k.course_id)
export data to studenttable
run query1 (&&t_tbl = course_table , &&k_tbl = my_test_keys ,
+ &&cond = (t.cour_id = k.cour_id
+ (and t.cour_dt between 2009-01-01 and 2010-02-02)
export data to coursetable
.....
This could capture all the data as needed.
You can then create an IMPORT_TEST proc to do the opposite:
import data from studenttable
save data as student_table (replace = yes
import data from coursetable
save data as course_table (replace = yes
....
It may take a while to create, but at least you would then have a re-usable tool to extract your data.
Hope that helps.

migrating django-model field-name change without losing data

I have a Django project with a database table that already contains data. I'd like to change the field name without losing any of the data in that column. My original plan was to simply change the model field name in a way that would not actually alter the name of the db column (using the db_column parameter):
The original model:
class Foo(models.Model):
    orig_name = models.CharField(max_length=50)
The new model:
class Foo(models.Model):
    name = models.CharField(max_length=50, db_column='orig_name')
But running South's schemamigration --auto produces a migration script that deletes the original column, orig_name, and adds a new column, name, which would have the unwanted side effect of deleting the data in that column. (I'm also confused as to why South wants to change the name of the column in the db, since my understanding of db_column was that it enables a change to the model field name without changing the name of the database column.)
If I can't get away with changing the model field without changing the db field, I guess I could do a more straight forward name change like so:
The original model:
class Foo(models.Model):
    orig_name = models.CharField(max_length=50)
The new model:
class Foo(models.Model):
    name = models.CharField(max_length=50)
Regardless of which strategy I end up using (I would prefer the first, but would find the second acceptable), my primary concern is ensuring that I don't lose the data that is already in that column.
Does this require a multi-step process? (such as 1. adding a column, 2. migrating the data from the old column to the new column, and 3. removing the original column)
Or can I alter the migration script with something like db.alter_column?
What is the best way to preserve the data in that column while changing the column's name?
Changing the field name while keeping the DB field
Adding an answer for Django 1.8+ (with Django-native migrations, rather than South).
Make a migration that first adds a db_column property, and then renames the field. Django understands that the first is a no-op (because it changes the db_column to stay the same), and that the second is a no-op (because it makes no schema changes). I actually examined the log to see that there were no schema changes...
operations = [
    migrations.AlterField(
        model_name='mymodel',
        name='oldname',
        field=models.BooleanField(default=False, db_column=b'oldname'),
    ),
    migrations.RenameField(
        model_name='mymodel',
        old_name='oldname',
        new_name='newname',
    ),
]
Django 2.0.9 (and onwards) can automatically detect if a field was renamed and gives the option to rename it instead of deleting it and creating a new one
(the same works for Django 2.2).
Initial answer
Posting in case it's still helpful for someone.
For Django 2.0+, simply rename the field in the model
class Foo(models.Model):
    orig_name = models.CharField(max_length=50)
to
class Foo(models.Model):
    name = models.CharField(max_length=50)
Now run python manage.py makemigrations
It'll generate a migration with operations for removing the old field and adding the new one.
Go ahead and change that to the following:
operations = [
    migrations.RenameField(
        model_name='foo',
        old_name='orig_name',
        new_name='name',
    ),
]
Now run python manage.py migrate; it'll rename the column in the DB without losing data.
It is quite easy to fix, but you will have to modify the migration yourself.
Instead of dropping and adding the column, use db.rename_column. You can simply modify the migration created by schemamigration --auto.
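A sketch of what the hand-edited South migration might look like (the myapp_foo table name is illustrative; South table names follow the <app>_<model> convention):
from south.db import db
from south.v2 import SchemaMigration

class Migration(SchemaMigration):

    def forwards(self, orm):
        # Rename the column in place instead of drop + add.
        db.rename_column('myapp_foo', 'orig_name', 'name')

    def backwards(self, orm):
        db.rename_column('myapp_foo', 'name', 'orig_name')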
Actually, with Django 1.10, just renaming the field in the model and then running makemigrations immediately identifies the operation (i.e. one field disappeared and another appeared in its stead):
$ ./manage.py makemigrations
Did you rename articlerequest.update_at to articlerequest.updated_at (a DateTimeField)? [y/N] y
Migrations for 'article_requests':
article_requests/migrations/0003_auto_20160906_1623.py:
- Rename field update_at on articlerequest to updated_at
I've run into this situation. I wanted to change the field names in the model but keep the column names the same.
The way I've done it is to do schemamigration --empty [app] [some good name for the migration]. The problem is that as far as South is concerned, changing the field names in the model is a change that it needs to handle, so a migration has to be created. However, we know there is nothing that has to be done on the database side. So an empty migration avoids doing unnecessary operations on the database and yet satisfies South's need to handle what it considers to be a change.
Note that if you use loaddata or Django's test fixture facility (which uses loaddata behind the scenes), you'll have to update the fixtures to use the new field name, because fixtures are based on the model field names, not the database column names.
For cases where column names do change in the database, I never recommend using db.rename_column for column migrations. I use the method described by sjh in this answer:
I have added the new column as one schemamigration, then created a datamigration to move values into the new field, then a second schemamigration to remove the old column
As I've noted in a comment on that question, the problem with db.rename_column is that it does not rename constraints together with the column. Whether the issue is merely cosmetic, or whether it means a future migration may fail because it cannot find a constraint, is unknown to me.
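For reference, here is a sketch of the middle step (copying values from the old column to the new one). The question is about South, where a datamigration's forwards(self, orm) method does the same job; this version uses a Django-native RunPython data migration, with illustrative app, model, and field names:
from django.db import migrations

def copy_old_to_new(apps, schema_editor):
    # Use the historical model so the migration stays valid as the model evolves.
    Foo = apps.get_model('myapp', 'Foo')
    for foo in Foo.objects.all():
        foo.name = foo.orig_name
        foo.save(update_fields=['name'])

class Migration(migrations.Migration):

    dependencies = [('myapp', '0002_add_name_column')]  # hypothetical predecessor

    operations = [
        migrations.RunPython(copy_old_to_new, migrations.RunPython.noop),
    ]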
It is possible to rename a field without doing any manual migration file editing:
▶︎ Start with something like this:
class Foo(models.Model):
    old_name = models.CharField(max_length=50)
▶︎ Add db_column=OLD_FIELD_NAME to the original field.
class Foo(models.Model):
    old_name = models.CharField(max_length=50, db_column='old_name')
▶︎ Run: python3 manage.py makemigrations
▶︎ Rename the field from OLD_FIELD_NAME to NEW_FIELD_NAME
class Foo(models.Model):
    new_name = models.CharField(max_length=50, db_column='old_name')
▶︎ Run: python3 manage.py makemigrations
You will be prompted:
Did you rename MODEL.OLD_FIELD_NAME to MODEL.NEW_FIELD_NAME (a ForeignKey)? [y/N] y
This will generate two migration files rather than just one, although both migrations are auto-generated.
This procedure works on Django 1.7+.
I ran into this situation on Django 1.7.7. I ended up doing the following, which worked for me.
./manage.py makemigrations <app_name> --empty
Added a simple subclass of migrations.RenameField that doesn't touch the database:
from django.db import migrations

class RenameFieldKeepDatabaseColumn(migrations.RenameField):
    # State-only rename: skip the schema changes in both directions.
    def database_forwards(self, app_label, schema_editor, from_state, to_state):
        pass

    def database_backwards(self, app_label, schema_editor, from_state, to_state):
        pass
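Hypothetical usage inside the empty migration's operations list (model and field names are illustrative):
operations = [
    RenameFieldKeepDatabaseColumn('mymodel', 'oldname', 'newname'),
]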
UPDATE: in Django 3.1 it is quite simple when changing only one field at a time.
In my case:
The old field name was: is_admin
The new field name was: is_superuser
When I made migrations with python manage.py makemigrations, it asked me whether I wanted to rename the field, and I just hit y to rename. Then I migrated with python manage.py migrate.
NOTE: I did not test with more than one field at a time.
This is for Django 4.0.
Let's do this with an example.
My original field name was anticipated_end_date, and I needed to rename it to tentative_end_date. Follow these steps to complete the operation:
Change anticipated_end_date to tentative_end_date inside the model
Run python manage.py makemigrations. Ideally, it will show the following message:
Was your_model_name.anticipated_end_date renamed to your_model_name.tentative_end_date (a DateField)? [y/N]
If it shows this message, then just press y and you are good to migrate, as it will generate the correct migration. However, if the makemigrations command does not ask about renaming the field, then go inside the generated migration and change the operations content in the following way:
operations = [
    migrations.RenameField(
        model_name='your_model_name',
        old_name='anticipated_end_date',
        new_name='tentative_end_date',
    ),
]
Now you can run python manage.py migrate.
This way your model field/DB column will be renamed, and your data will not be lost.
As pointed out in the other responses, it is now quite easy to rename a field with no changes to the database by using db_column. But the generated migration will actually produce some SQL statements. You can verify that by calling ./manage.py sqlmigrate ... on your migration.
To avoid any impact on your database, you need to use SeparateDatabaseAndState to tell Django that it doesn't need to do anything in the DB.
I wrote a small article about that if you want to know more about it.
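A minimal sketch of such a migration (app, model, and field names are illustrative; this assumes the model field keeps db_column pointing at the old column name). The state operation records the rename for Django, while the empty database_operations list means no SQL is executed:
from django.db import migrations

class Migration(migrations.Migration):

    dependencies = [('myapp', '0002_previous_migration')]  # hypothetical

    operations = [
        migrations.SeparateDatabaseAndState(
            state_operations=[
                migrations.RenameField(
                    model_name='foo',
                    old_name='orig_name',
                    new_name='name',
                ),
            ],
            database_operations=[],  # nothing to run against the database
        ),
    ]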
1. Edit the field name on the Django model.
2. Create an empty migration like below:
$ python manage.py makemigrations --empty testApp (testApp is your application name)
3. Edit the empty migration file that was just created:
operations = [
    migrations.RenameField('your model', 'old field', 'new field'),
]
4. Apply the migration:
$ python manage.py migrate
The database column will be renamed to the new name.

How do I version a SQL Server Database?

I need to put versions onto a SQL Server 2005 database and have these accessible from a .NET application. What I was thinking is using an Extended Property on the database with a name of 'version', and of course the value would be the version of the database. I can then use SQL to get at this. My question is: does this sound like a good plan, or is there a better way to add versions to a SQL Server database?
Let's assume I am unable to use a table for holding the metadata.
I do this:
Create a schema table:
CREATE TABLE [dbo].[SchemaVersion](
    [Major] [int] NOT NULL,
    [Minor] [int] NOT NULL,
    [Build] [int] NOT NULL,
    [Revision] [int] NOT NULL,
    [Applied] [datetime] NOT NULL,
    [Comment] [text] NULL)
Update Schema:
INSERT INTO SchemaVersion(Major, Minor, Build, Revision, Applied, Comment)
VALUES (1, 9, 1, 0, getdate(), 'Add Table to track pay status')
Get database Schema Version:
SELECT TOP 1 Major, Minor, Build from SchemaVersion
ORDER BY Major DESC, Minor DESC, Build DESC, Revision DESC
Adapted from what I read on Coding Horror
We use the Extended Properties as you described it and it works really well.
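For completeness, here is a sketch of reading such a database-level extended property back (the question targets .NET, and the same query works verbatim from ADO.NET; the pyodbc connection string here is illustrative):
# class = 0 in sys.extended_properties means a database-level property.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;"
)
cur = conn.cursor()
cur.execute(
    "SELECT CAST(value AS nvarchar(128)) "
    "FROM sys.extended_properties WHERE class = 0 AND name = 'version'"
)
row = cur.fetchone()
print(row[0] if row else "no version property set")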
I think having a table is overkill. If I want to track the differences in my databases, I use source control and keep all the db generation scripts in it.
I've also used some ER diagram tools to help me keep track of changes in DB versions. This was outside the actual application but it allowed me to quickly see what changed.
I think it was CASEStudio, or something like that.
If I understand your question right (differentiating between internal database versions, like application build numbers), you could have some sort of SYSVERSION table that held a single row of data with this info.
Easier to query.
Could also contain multiple columns of useful info, or multiple rows that represent different times that copy of the database was upgraded.
Update: Well, if you can't use a table to hold the metadata, then either external info of some sort (an INFO file on the hard drive?) or extended properties would be the way to go.
I still like the table idea, though :) You could always use security to make it accessible only through a custom stored proc, get_db_version or something.
The best way to do this is to have 2 procedures: a header to control what is being inserted and run validations, and a footer to insert the data depending on whether the release is good or not. The body will contain your scripts.
You need a wrapper that will encapsulate your script and record all the info: release, script number applied, applied by, apply date, and release outcome ("failed" or "succeeded").
I am using a dedicated table similar to Matt's solution. In addition to that, database alters must check the current version before applying any changes to the schema. If the current version is smaller than expected, the script terminates with a fatal error. If the current version is larger than expected, the script skips the current step, because that step has already been performed at some point in the past.
Here is the complete solution with examples and conventions in writing database alter scripts: How to Maintain SQL Server Database Schema Version
