How to resolve a circular reference in Django models? - django-models

I'm converting an old app from ASP.NET + MSSQL to Django + Postgres. The existing design looks like this:
create table foo
( id integer
, name varchar(20)
, current_status_id integer null
)
create table foo_status
( id integer
, foo_id integer
, status_date datetime
, status_description varchar(100)
)
So each foo has multiple foo_status records, but there is a denormalized field, current_status_id, that points to the latest status record.
To convert the data, I just defined foo.current_status_id as an IntegerField, not as a ForeignKey, because Postgres would (correctly) gripe about missing foreign keys no matter which table I loaded first.
Now that I've converted the data, I'd like to have all the foreign-key goodness again, for things like querying. Is there a good way to handle this besides changing the model field from IntegerField (before running syncdb) to ForeignKey (afterward)?

A few points about how Django works:
./manage.py syncdb does not modify existing tables. You can modify your model fields and run syncdb, but your db will stay intact. If you do need this functionality, use South.
When you create a new instance x with a myfk ForeignKey field, set its x.myfk_id by assigning it an integer, and x.save() it, the constraint is checked only at the db level: Django will not throw an exception if the referenced records are missing. Therefore, you can first create the tables without the constraints (either by using IntegerField + syncdb as you suggested, or by carefully running a modified version of the ./manage.py sqlall output for the ForeignKey version), load your data, and then ALTER TABLE your db manually.
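The load-first, enforce-later sequence can be sketched outside Django. This is a minimal illustration using Python's sqlite3 from the standard library (a stand-in engine chosen only for demonstration - the question targets Postgres, where you would instead run ALTER TABLE ... ADD CONSTRAINT ... FOREIGN KEY after loading); the table names follow the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Foreign-key enforcement is off (sqlite's default), so rows can be loaded
# in any order even though the two tables reference each other.
conn.execute("PRAGMA foreign_keys = OFF")
conn.execute("""CREATE TABLE foo (
    id INTEGER PRIMARY KEY,
    name VARCHAR(20),
    current_status_id INTEGER REFERENCES foo_status(id))""")
conn.execute("""CREATE TABLE foo_status (
    id INTEGER PRIMARY KEY,
    foo_id INTEGER REFERENCES foo(id),
    status_description VARCHAR(100))""")

# foo is loaded first; its current_status_id points at a not-yet-loaded row.
conn.execute("INSERT INTO foo VALUES (1, 'widget', 10)")
conn.execute("INSERT INTO foo_status VALUES (10, 1, 'active')")
conn.commit()

# With all data in place, turn enforcement on and verify nothing dangles.
conn.execute("PRAGMA foreign_keys = ON")
violations = conn.execute("PRAGMA foreign_key_check").fetchall()
print(violations)  # [] -> every reference resolves
```

On Postgres, adding the constraint afterwards performs the equivalent check: ALTER TABLE validates existing rows as part of ADD CONSTRAINT (unless you specify NOT VALID).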

Related

Access to SQL link table showing Allow Zero Length as 'Yes'

I have made multiple attempts at creating a SQL Server table, linked to an Access front-end, with a 'disallow zero length value' constraint on the table, but when I link the table to my front-end, the table design shows Allow Zero Length = Yes.
I have tried various methods of changing this to No (I need it to be No for a migration project I am working on). I am not sure what needs to be done on the SQL Server side to ensure that, upon linking this table to my Access front-end, this property is No.
This is the Create script for my table:
Create Table Riku(
ID int NOT NULL PRIMARY KEY,
testtext varchar(255),
CONSTRAINT DissalowNulls
CHECK (testtext <> ''),
CONSTRAINT DissalowNull2
CHECK (LEN(testtext) = 0)
);
Neither of these two constraints works. I have tried Nvarchar, Varchar, and Text as the SQL data type, all of which yielded this same result (Yes).
Any ideas?
You must mark the column itself as not allowing NULL:
Create Table Riku(
ID int NOT NULL PRIMARY KEY,
testtext varchar(255) NOT NULL,
CONSTRAINT DissalowNulls
CHECK (testtext <> '')
);
I am interpreting your question as:
"Why is it that when I create a constraint in SQL Server to 'DissalowNulls',
it does not appear that way when viewing the table properties in
Access?"
My answer to that question is: "they are not syntactically equivalent features." When Access interprets the design of the linked table, it does not perceive these as the same property. The constraint you created in SQL Server is closer to an Access Validation Rule, although that will also not appear in the Access table designer.
It would be nice if Access would disable properties that aren't relevant to the database type of the linked table. Other properties like Format, Input Mask, and Caption could also be in that category.
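Whatever Access displays, you can confirm what the server-side rules actually enforce by testing inserts directly. A quick sketch using Python's sqlite3 (a stand-in engine chosen for demonstration; the table mirrors the answer's Riku):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE Riku (
    ID INTEGER NOT NULL PRIMARY KEY,
    testtext VARCHAR(255) NOT NULL,                  -- rejects NULL
    CONSTRAINT DissalowNulls CHECK (testtext <> '')  -- rejects the empty string
)""")

conn.execute("INSERT INTO Riku VALUES (1, 'ok')")    # accepted

results = []
for bad in [(2, None), (3, '')]:
    try:
        conn.execute("INSERT INTO Riku VALUES (?, ?)", bad)
        results.append(False)                         # row slipped through
    except sqlite3.IntegrityError:
        results.append(True)                          # constraint rejected the row
print(results)  # [True, True] -> NULL and '' are both rejected
```

So the data is protected on the server side even though the linked-table designer in Access reports Allow Zero Length = Yes.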

Complex refactor and version control with Database Projects

Let's say I have a table like so:
CREATE TABLE Foo
(
Id INT IDENTITY NOT NULL PRIMARY KEY,
Data VARCHAR(10) NOT NULL,
TimeStamp DATETIME NOT NULL DEFAULT GETUTCDATE()
);
Now let's say I build this in a SQL Server Database Project, and I publish this in version 1.0 of my application. The application is deployed, and the table is used as expected.
For the 1.1 release, the product owners decide they want to track the source of the data, and this will be a required column going forward. For the data that already exists in the database, if the Data column is numeric, they want the Source to be 'NUMBER'. If not, it should be 'UNKNOWN'.
The table in the database project now looks like this:
CREATE TABLE Foo
(
Id INT IDENTITY NOT NULL PRIMARY KEY,
Data VARCHAR(10) NOT NULL,
Source VARCHAR(10) NOT NULL,
TimeStamp DATETIME NOT NULL DEFAULT GETUTCDATE()
);
This builds fine, but deploying an upgrade would be a problem if data exists in the table. The generated script will create a temporary table, move data from the old table into the temp one, drop the old table, and rename the temp table to the original name... but that copy fails, because it cannot assign values to the non-nullable column Source.
For trivial refactors, the refactor log tracks changes in the schema and maintains awareness of the modified database objects, but there doesn't seem to be a way to do this once you get your hands a little dirty.
How can the Database Project be leveraged to replace the default script for this change with a custom one that properly captures the upgrade logic? There must be some way to address this issue.
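One common workaround is a pre-deployment script that adds the column as nullable, backfills it, and only then tightens it to NOT NULL, so the generated table-rebuild never has to invent values. The backfill logic might look roughly like this (a sketch using Python's sqlite3 as a stand-in; in a real SSDT project this would live in a pre/post-deployment .sql script, and T-SQL's ISNUMERIC would replace the GLOB test):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Foo (Id INTEGER PRIMARY KEY, Data VARCHAR(10) NOT NULL)")
conn.executemany("INSERT INTO Foo (Data) VALUES (?)", [("123",), ("abc",), ("42",)])

# Step 1: add the new column as nullable so existing rows don't break.
conn.execute("ALTER TABLE Foo ADD COLUMN Source VARCHAR(10)")

# Step 2: backfill -- 'NUMBER' for numeric Data, 'UNKNOWN' otherwise.
conn.execute("""UPDATE Foo SET Source =
    CASE WHEN Data GLOB '[0-9]*' AND Data NOT GLOB '*[^0-9]*'
         THEN 'NUMBER' ELSE 'UNKNOWN' END""")

# Step 3 (in an engine that supports it):
#   ALTER TABLE Foo ALTER COLUMN Source VARCHAR(10) NOT NULL
rows = conn.execute("SELECT Data, Source FROM Foo ORDER BY Id").fetchall()
print(rows)  # [('123', 'NUMBER'), ('abc', 'UNKNOWN'), ('42', 'NUMBER')]
```

With the column already present and populated before the schema compare runs, the generated upgrade script no longer needs to rebuild the table around a non-nullable column it cannot fill.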

Maintaining an audit trail on specific entities (tables) in SQL Server, using triggers (and a stored proc)

I want to maintain an audit trail for some tables in a SQL Server database. For these tables, I want to record all CRUD activities on the entity (object) stored in the table. This is the table schema I have come up with so far. For updates to an entity, please note that I am storing detailed data showing the fields that were changed in a sub (related) table. To be able to store different values in the same column, I am storing text representations of the field values.
I would want some feedback on:
The table design (can it be improved? are there any gotchas to be aware of with this design?)
I would like to write a trigger (per monitored table), so that I can provide an audit trail for CRUD operations on the monitored entity. I am new to triggers, so an example of how I can use a trigger to record CRUD operations on entity Foo would be very useful. Being the lazy bugger that I am, I want to keep the SQL DRY, so I would like all of the triggers to call a common stored proc that takes the (who, what, when, why) as parameters.
Code:
CREATE TABLE Foo (
id INTEGER NOT NULL PRIMARY KEY,
field_1 REAL,
field_2 VARCHAR(256) NOT NULL,
field_3 BIT,
field_4 IMAGE,
field_5 VARCHAR(256), -- Path to an image file
field_6 DATE,
field_7 TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
field_8 INTEGER REFERENCES Personnel(id) ON UPDATE CASCADE ON DELETE NO ACTION
);
-- store CRUD actions info (who, what, when, why) on specific tables
CREATE TABLE GenericRecordArchive (
id INTEGER NOT NULL PRIMARY KEY,
personnel_id INTEGER REFERENCES Personnel(id) ON UPDATE CASCADE ON DELETE NO ACTION, -- who
monitored_object_id INTEGER REFERENCES MonitoredObject(id) ON UPDATE CASCADE ON DELETE NO ACTION, -- what ...
crud_type_id INTEGER REFERENCES CrudType(id) ON UPDATE CASCADE ON DELETE NO ACTION,
reason TEXT NOT NULL, -- why
created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP -- when
);
-- Detailed archive info for updates, stored changed fields (for updates only)
CREATE TABLE GenericRecordArchiveInfo (
id INTEGER NOT NULL PRIMARY KEY,
archive_record_id INTEGER REFERENCES GenericRecordArchive(id) ON UPDATE CASCADE ON DELETE NO ACTION,
field_name VARCHAR(128) NOT NULL,
old_value TEXT,
new_value TEXT,
created_at TIMESTAMP
);
Notes:
For the sake of simplicity, I have not included indices, etc., in the schema above.
Although I am working on SQL Server, I may want to port this to some other database in the future, so wherever possible I would like the SQL to be as db-agnostic as possible (unless there are huge efficiency gains, parsimony of code, etc. in keeping the SQL code SQL Server-centric).
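As a starting point for the requested trigger example, here is a simplified per-column audit trigger (a sketch using Python's sqlite3, chosen here because the question asks for portable SQL; a SQL Server version would read the inserted/deleted pseudo-tables instead of NEW/OLD and could call a common stored proc, and the tables are trimmed to the relevant columns):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Foo (id INTEGER PRIMARY KEY, field_2 VARCHAR(256) NOT NULL);
CREATE TABLE GenericRecordArchiveInfo (
    id INTEGER PRIMARY KEY,
    field_name VARCHAR(128) NOT NULL,
    old_value TEXT,
    new_value TEXT);

-- One trigger per monitored column: log old/new text values on change.
CREATE TRIGGER tr_Foo_audit_field_2
AFTER UPDATE OF field_2 ON Foo
WHEN OLD.field_2 <> NEW.field_2
BEGIN
    INSERT INTO GenericRecordArchiveInfo (field_name, old_value, new_value)
    VALUES ('field_2', OLD.field_2, NEW.field_2);
END;
""")

conn.execute("INSERT INTO Foo VALUES (1, 'alpha')")
conn.execute("UPDATE Foo SET field_2 = 'beta' WHERE id = 1")
trail = conn.execute(
    "SELECT field_name, old_value, new_value FROM GenericRecordArchiveInfo"
).fetchall()
print(trail)  # [('field_2', 'alpha', 'beta')]
```

In practice you would generate one such trigger per monitored column (or compare all columns in one trigger body) from the schema catalog, which keeps the hand-written SQL DRY even though each table still needs its own trigger.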

How do I get MS LightSwitch to recognize my View?

I've created a View from a table in another database. I have dbo rights to the databases, so viewing and updating is not a problem. This particular View did not have an "id" column, so I added one to the View using ROW_NUMBER. I previously had a problem with a table in the same database not showing up in LightSwitch, but that was solved by changing the id column to be NOT NULL. I haven't done any real manipulation in LightSwitch; I'm still in the Import Your Data Source stage (i.e. the very beginning).
This View, in LightSwitch, is going to be read-only. No updating or deleting. From what I've read, LightSwitch needs a way to determine the PK of a Table or View. It either reads it from the schema (column set as a PK) or finds a column set as NOT NULL and uses that as the PK. Well I can't seem to do either of those things in SQL Server or LightSwitch, so I am stuck as to how to get LightSwitch to "see" my View.
For LightSwitch to see your view, you must have a primary key on a column of the table you are selecting from.
Example:
create table tbl_test
(
id int identity primary key not null,
value varchar(50)
)
create view vw_test
as
select *
from tbl_test
Note: sometimes, when you edit the primary-key column in the view's select statement, it may cause LightSwitch not to see it.
Example:
create view vw_test
as
select cast(id as varchar(50)) id, ...
LightSwitch would not see the view.
Hope this was helpful! :)
What I do in this case is create a view with an ID column equal to the row number. Ensure the column you're basing the ID on is not null by using the isnull() or coalesce() functions.
Example:
create view My_View as
select distinct ID = row_number() over (order by isnull(Name,'')),
Name = isnull(Name,'')
from My_Table
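That pattern can be tried end to end (a sketch using Python's sqlite3, assuming a SQLite build new enough for window functions; the table and column names follow the answer):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE My_Table (Name VARCHAR(50));
INSERT INTO My_Table VALUES ('beta'), ('alpha'), (NULL);

-- Synthesize a non-null ID column the way the answer describes.
CREATE VIEW vw_with_id AS
SELECT ROW_NUMBER() OVER (ORDER BY COALESCE(Name, '')) AS ID,
       COALESCE(Name, '') AS Name
FROM My_Table;
""")

rows = conn.execute("SELECT ID, Name FROM vw_with_id ORDER BY ID").fetchall()
print(rows)  # [(1, ''), (2, 'alpha'), (3, 'beta')]
```

Every row gets a stable, non-null integer ID, which is exactly the property LightSwitch needs to infer a primary key for a read-only view.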

Creating a SQL Server trigger to transition from a natural key to a surrogate key

Backstory
At work, we're planning on deprecating a natural-key column in one of our primary tables. The project consists of 100+ applications that link to this table/column; 400+ stored procedures that reference this column directly; and a vast array of common tables between these applications that also reference this column.
The Big Bang and Start from Scratch methods are out of the picture. We're going to deprecate this column one application at a time, certify the changes, and move on to the next... and we've got a lengthy target goal to make this effort practical.
The problem I have is that a lot of these applications have shared stored procedures and tables. If I completely convert all of Application A's tables/stored procedures, Applications B and C will be broken until converted. These in turn may break Applications D, E, F... etc. I've already got a strategy implemented for code classes and stored procedures; the part I'm stuck on is the transitional state of the database.
Here's a basic example of what we have:
Users
---------------------------
Code varchar(32) natural key
Access
---------------------------
UserCode varchar(32) foreign key
AccessLevel int
And for now we're aiming just for a transitional state like this:
Users
---------------------------
Code varchar(32)
Id int surrogate key
Access
---------------------------
UserCode varchar(32)
UserID int foreign key
AccessLevel int
The idea is that during the transitional phase, un-migrated applications and stored procedures will still be able to access all the appropriate data, and new ones can start pushing to the correct columns. Once the migration is complete for all stored procedures and applications, we can finally drop the extra columns.
I wanted to use SQL Server triggers to automatically intercept any new inserts/updates and do something like the following on each of the affected tables:
CREATE TRIGGER tr_Access_Sync
ON Access
INSTEAD OF INSERT, UPDATE
AS
BEGIN
DECLARE @code varchar(32)
DECLARE @id int
SET @code = (SELECT inserted.code FROM inserted)
SET @id = (SELECT inserted.id FROM inserted)
-- This is a migrated application; find the appropriate legacy key
IF @code IS NULL AND @id IS NOT NULL
SET @code = (SELECT Code FROM Users WHERE Users.id = @id)
-- This is a legacy application; find the appropriate surrogate key
IF @id IS NULL AND @code IS NOT NULL
SET @id = (SELECT Id FROM Users WHERE Users.Code = @code)
-- Impossible code:
UPDATE inserted SET inserted.code = @code, inserted.id = @id
END
Question
The 2 huge problems I'm having so far are:
I can't do an "AFTER INSERT" because the NOT NULL constraints will make the insert fail.
The "impossible code" I mentioned is how I'd like to cleanly proxy the original query: whether the original query has x, y, z columns in it or just x, I'd ideally like the same trigger to handle both. And if I add or delete another column, I'd like the trigger to remain functional.
Does anyone have a code example where this could be possible, or even an alternate solution for keeping these columns properly filled even when only one of the values is passed to SQL?
Tricky business...
OK, first of all: this trigger will NOT work in many circumstances:
SET @code = (SELECT inserted.code FROM inserted)
SET @id = (SELECT inserted.id FROM inserted)
The trigger can be called with a set of rows in the Inserted pseudo-table - which one are you going to pick here? You need to write your trigger in such a fashion that it will work even when there are 10 rows in the Inserted table. If a SQL statement inserts 10 rows, your trigger will not be fired ten times - once for each row - but only once for the whole batch - you need to take that into account!
Second point: I would try to make the IDs IDENTITY fields - then they'll always get a value, even for "legacy" apps. Those "old" apps should provide a legacy key instead, so you should be fine there. The only issue I see, and don't know how you handle, is inserts from an already-converted app - do they provide an "old-style" legacy key as well? If not, how quickly do you need to have such a key?
What I'm thinking of is a "cleanup job" that would run over the table, get all the rows with a NULL legacy key, and then provide some meaningful value for them. Make this a regular stored procedure and execute it regularly - every day, every four hours, every 30 minutes, whatever suits your needs. Then you don't have to deal with triggers and all the limitations they have.
Wouldn't it be possible to make the schema changes 'bigbang' but create views over the top of those tables that 'hide' the change?
I think you might find you are simply putting off the breakages to a later point in time: "We're going to deprecate this column one application at a time" - it may be my naivety, but I can't see how that's ever going to work.
Surely, a worse mess can occur when different applications are doing things differently?
After sleeping on the problem, this seems to be the most generic/reusable solution I could come up with within the SQL syntax. It works fine even if both columns have a NOT NULL constraint, and even if you don't reference the "other" column at all in your insert.
CREATE TRIGGER tr_Access_Sync
ON Access
INSTEAD OF INSERT
AS
BEGIN
/* Create a temporary table to work in, because "inserted" is read-only */
SELECT * INTO #temp FROM inserted
/* If the secondary table has its own identity column, drop it from #temp
   so the INSERT below still lines up with the target columns */
ALTER TABLE #temp DROP COLUMN oneToManyIdentity
UPDATE #temp
SET
UserCode = ISNULL(UserCode, (SELECT UserCode FROM Users U WHERE U.UserID = #temp.UserID)),
UserID = ISNULL(UserID, (SELECT UserID FROM Users U WHERE U.UserCode = #temp.UserCode))
INSERT INTO Access SELECT * FROM #temp
END
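The same backfill idea can be exercised in miniature (a sketch using Python's sqlite3; note the differences: sqlite fires triggers once per row rather than once per statement, so no #temp staging is needed, COALESCE stands in for ISNULL, and the key columns are left nullable because an AFTER trigger only runs once the row has already been accepted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Users (Id INTEGER PRIMARY KEY, Code VARCHAR(32));
CREATE TABLE Access (UserCode VARCHAR(32), UserId INTEGER, AccessLevel INTEGER);

-- After each insert, fill in whichever key column was omitted.
CREATE TRIGGER tr_Access_Sync
AFTER INSERT ON Access
BEGIN
    UPDATE Access SET
        UserCode = COALESCE(UserCode,
                            (SELECT Code FROM Users WHERE Id = NEW.UserId)),
        UserId   = COALESCE(UserId,
                            (SELECT Id FROM Users WHERE Code = NEW.UserCode))
    WHERE rowid = NEW.rowid;
END;
""")

conn.execute("INSERT INTO Users VALUES (1, 'ALICE')")
# Legacy app: supplies only the natural key.
conn.execute("INSERT INTO Access (UserCode, AccessLevel) VALUES ('ALICE', 5)")
# Migrated app: supplies only the surrogate key.
conn.execute("INSERT INTO Access (UserId, AccessLevel) VALUES (1, 9)")

rows = conn.execute(
    "SELECT UserCode, UserId, AccessLevel FROM Access ORDER BY AccessLevel"
).fetchall()
print(rows)  # [('ALICE', 1, 5), ('ALICE', 1, 9)]
```

Either style of caller ends up with both key columns populated, which is the invariant the transitional schema depends on.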
