I'm using SQL Server as the database in a Django project for college, and I need to fire my triggers and stored procedures through Django. I've been looking for a way to do this for a few days and can't find one. Can anyone help me?
In case anyone else ends up on this page, this is how I managed to do it. My interpretation of the question is how to have actions on the database trigger functions on the database end. My backend is PostgreSQL, but since SQL is a standard, the queries for MySQL and other databases should be about the same.
The solution is relatively simple. Once you have run your first
python manage.py makemigrations
python manage.py migrate
Head over to your database manager of choice and look up the SQL query that generated the table on which you wish to have your trigger.
For example, the creation query for your public.auth_user table might look like this:
CREATE TABLE public.auth_user
(
id integer NOT NULL DEFAULT nextval('auth_user_id_seq'::regclass),
password character varying(128) COLLATE pg_catalog."default" NOT NULL,
last_login timestamp with time zone,
is_superuser boolean NOT NULL,
username character varying(150) COLLATE pg_catalog."default" NOT NULL,
first_name character varying(30) COLLATE pg_catalog."default" NOT NULL,
last_name character varying(150) COLLATE pg_catalog."default" NOT NULL,
email character varying(254) COLLATE pg_catalog."default" NOT NULL,
is_staff boolean NOT NULL,
is_active boolean NOT NULL,
date_joined timestamp with time zone NOT NULL,
CONSTRAINT auth_user_pkey PRIMARY KEY (id),
CONSTRAINT auth_user_username_key UNIQUE (username)
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
Let's say you want a trigger that changes the last_name of every new record to the value "Trump" (without quotation marks). The code to create the trigger function would look like this (N.B. the RAISE NOTICE lines just echo information to the SQL terminal for debugging; you can comment them out by adding a double dash in front of them, like --RAISE NOTICE 'last_name = % ', NEW.last_name;):
CREATE OR REPLACE FUNCTION trumpisizer() RETURNS trigger AS $$
BEGIN
RAISE NOTICE 'last_name = % ', NEW.last_name;
NEW.last_name = 'Trump';
RAISE NOTICE 'last_name = % ', NEW.last_name;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
Now you need to bind your function to your table. The SQL query to do so is:
CREATE TRIGGER trumpist BEFORE INSERT ON auth_user FOR EACH ROW EXECUTE PROCEDURE trumpisizer();
Now load up your Django app and create a new user. Every new user's last_name will be changed to the new value.
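If you want to see the same before/after effect without a PostgreSQL instance handy, here is a minimal sketch of the idea using Python's built-in sqlite3 module. Note one assumption: SQLite cannot rewrite NEW inside a BEFORE trigger, so an AFTER INSERT trigger that updates the freshly inserted row is used to emulate the effect; the table and trigger names simply mirror the example above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auth_user (id INTEGER PRIMARY KEY, last_name TEXT)")

# SQLite cannot modify NEW in a BEFORE trigger, so emulate the effect
# with an AFTER INSERT trigger that updates the freshly inserted row.
conn.execute("""
    CREATE TRIGGER trumpist AFTER INSERT ON auth_user
    BEGIN
        UPDATE auth_user SET last_name = 'Trump' WHERE id = NEW.id;
    END
""")

conn.execute("INSERT INTO auth_user (last_name) VALUES ('Smith')")
row = conn.execute("SELECT last_name FROM auth_user").fetchone()
print(row[0])  # prints 'Trump' -- the trigger rewrote the inserted value
```

The mechanism is the same as in the PostgreSQL version: the application inserts normally, and the database rewrites the row on its own.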
Slightly off-topic advice, so please forgive me.
Why not use Flask? The power of Django lies largely in working within its framework: you build your database with models.py, you perform migrations through the framework, and you leverage it for data operations with custom middleware or signals.
If you have a database that already does a lot of the heavy lifting for you, then it might be easier to work with a less "batteries included" framework than Django and use Flask or Bottle. This is especially true for a college project that doesn't require enterprise features or stability; it might be easier to hack and slash through a less rigidly defined framework. If it's towards the end of the semester, learning Django might be a tall order.
I'm just going to answer the question, but I can tell you from experience that you're probably headed down some paths that are far off what would be considered best practice. You may want to do a little more digging if this is going to become a permanent project; if it's a learning exercise, that's cool too!
On to the answer: Django is written in Python, and you can trigger stored procedures in SQL Server from Python using pyodbc. To use SQL Server with Django's ORM, you'll want a Django engine such as django-pyodbc-azure (install with pip install django-pyodbc-azure), which will also install pyodbc. If you're running Django on Linux or macOS, you'll also need a SQL Server-compatible ODBC driver, such as the MS ODBC driver or FreeTDS (for up-to-date details on installing drivers on Linux, see https://pyphilly.org/django-and-sql-server-2018-edition/).
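For concreteness, here is a sketch of what the wiring might look like. The database name, credentials, host, driver string, and the procedure name dbo.my_procedure are all placeholders of my own; the ENGINE value assumes django-pyodbc-azure is the backend in use, so verify it against whichever engine you install.

```python
# settings.py -- sketch assuming django-pyodbc-azure; the NAME, USER,
# PASSWORD, HOST, and driver values below are placeholders.
DATABASES = {
    "default": {
        "ENGINE": "sql_server.pyodbc",
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "mypassword",
        "HOST": "localhost",
        "PORT": "1433",
        "OPTIONS": {"driver": "ODBC Driver 17 for SQL Server"},
    }
}

# Elsewhere (e.g. in a view): calling a stored procedure through the
# Django connection. dbo.my_procedure is a hypothetical example name.
from django.db import connection

def run_my_procedure(arg):
    with connection.cursor() as cursor:
        cursor.execute("EXEC dbo.my_procedure %s", [arg])
        return cursor.fetchall()
```

Triggers need no Django-side code at all: once they exist in the database, they fire on the ORM's ordinary INSERT/UPDATE/DELETE statements.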
Good luck.
We are trying to work with temporal tables in SQL Server 2016. We are developing the SQL scripts in SSDT 15.1.6 in Visual Studio 2017, but we are experiencing issues when trying to deploy the dacpac that is generated during the build.
Our dacpac is deployed using SqlPackage.exe, and we encounter this error when attempting to deploy the dacpac:
Creating [dbo].[TestHISTORY].[ix_TestHISTORY]...
An error occurred while the batch was being executed.
Updating database (Failed)
Could not deploy package.
Error SQL72014: .Net SqlClient Data Provider:
Msg 1913, Level 16, State 1, Line 1
The operation failed because an index or statistics with name 'ix_TestHISTORY' already exists on table 'dbo.TestHistory'.
Error SQL72045: Script execution error. The executed script:
CREATE CLUSTERED INDEX [ix_TestHISTORY]
ON [dbo].[TestHistory]([SysStart] ASC, [SysEnd] ASC);
When we create the temporal table in SSDT we have the following:
CREATE TABLE [dbo].[Test]
(
[Id] INT NOT NULL PRIMARY KEY,
[SysStart] DATETIME2 (7) GENERATED ALWAYS AS ROW START NOT NULL,
[SysEnd] DATETIME2 (7) GENERATED ALWAYS AS ROW END NOT NULL,
PERIOD FOR SYSTEM_TIME ([SysStart], [SysEnd])
)
WITH (SYSTEM_VERSIONING = ON(HISTORY_TABLE=[dbo].[TestHISTORY], DATA_CONSISTENCY_CHECK=ON))
As far as I can tell the issue is with the dacpac creation. After the project is built, the dacpac created looks like this:
CREATE TABLE [dbo].[test]
(
[Id] INT NOT NULL PRIMARY KEY CLUSTERED ([Id] ASC),
[SysStart] DATETIME2 (7) GENERATED ALWAYS AS ROW START NOT NULL,
[SysEnd] DATETIME2 (7) GENERATED ALWAYS AS ROW END NOT NULL,
PERIOD FOR SYSTEM_TIME ([SysStart], [SysEnd])
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE=[dbo].[testHISTORY], DATA_CONSISTENCY_CHECK=ON));
GO
CREATE TABLE [dbo].[testHISTORY]
(
[Id] INT NOT NULL,
[SysStart] DATETIME2 (7) NOT NULL,
[SysEnd] DATETIME2 (7) NOT NULL
);
GO
CREATE CLUSTERED INDEX [ix_testHISTORY]
ON [dbo].[testHISTORY]([SysEnd] ASC, [SysStart] ASC);
GO
I suspect that because we are using a temporal table with a default history table, the dacpac should not be emitting those extra creation statements: it is effectively making SQL Server try to create those objects twice, which leads to the error above.
Does anyone know what we might be missing? Or if you are deploying temporal tables using a dacpac, is your only option to use user-defined history tables?
We've had a number of issues between temporal tables and DACPACs. A few tips that will go a long way:
Explicitly declare history tables - This goes way further than one would think. When adding/removing columns, you can define a default on history tables, allowing you to bypass a number of issues that arise when data is already in the tables.
Add defaults to EVERYTHING - This cannot be overstated. Defaults are the best friend of a DACPAC.
Review the scripts - It's nice to think of DACFx as hands off, but it's not. Review the scripts once in a while, and you'll gain a ton of insight (it appears you already are!)
Explicitly name your indices - DACFx sometimes uses temporary names for indices/tables/other stuff. Consistency is king, right?
Review ALL publish profile options - Sometimes, there are settings you didn't think of in the profile. It took us a lot of manual intervention before we realized there was a setting for transactional scripts in the publish profile.
Also look into what is turning your DACPAC into a script. VS uses SqlPackage.exe, but I sometimes get different results from the DACFx DLLs. It's likely a config difference between the two, but it's tough to pin down. Just try both and see if one works better.
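To make the publish-profile tip concrete: the transactional-scripts behavior mentioned above is controlled by a property in the .publish.xml file. A minimal sketch follows; IncludeTransactionalScripts and BlockOnPossibleDataLoss are standard SqlPackage/DACFx publish options, but the surrounding values are placeholders, so verify the property names against your tooling version.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- MyDb.publish.xml: sketch of a publish profile; adjust for your environment. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetDatabaseName>MyDb</TargetDatabaseName>
    <!-- Wrap the generated deployment script in a transaction -->
    <IncludeTransactionalScripts>True</IncludeTransactionalScripts>
    <!-- Stop the deployment rather than risk destructive changes -->
    <BlockOnPossibleDataLoss>True</BlockOnPossibleDataLoss>
  </PropertyGroup>
</Project>
```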
Best of luck! Hope this helps!
One potentially hacky workaround you can try is pre-deployment scripts:
https://msdn.microsoft.com/en-us/library/jj889461(v=vs.103).aspx
They are executed between 'Generation of deployment script' and 'Execution of the deployment script'. So if you can't avoid the collision on the index name, you can probably rename the existing index before the upgrade. This is hacky, and I am assuming you are deploying/updating the schema of a live DB rather than creating a new one.
BTW, where are the column names 'ValidFrom' and 'ValidTo' in the error message coming from? If the history table is auto-generated, they should be 'SysStart' and 'SysEnd'.
Looking for a workaround for:
Error: SQL71609: System-versioned current and history tables do not have matching schemes. Mismatched column: 'XXXX'.
When trying to use SQL 2016 System-Versioned (Temporal) tables in SSDT for Visual Studio 2015.
I've defined a basic table:
CREATE TABLE [dbo].[Example] (
[ExampleId] INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
[ExampleColumn] VARCHAR(50) NOT NULL,
[SysStartTime] datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
[SysEndTime] datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime,SysEndTime)
)
WITH (SYSTEM_VERSIONING=ON(HISTORY_TABLE=[history].[Example]))
GO
(This assumes the [history] schema is properly created in SSDT.) This builds fine the first time.
If I later make a change:
CREATE TABLE [dbo].[Example] (
[ExampleId] INT NOT NULL IDENTITY(1,1) PRIMARY KEY,
[ExampleColumn] CHAR(50) NOT NULL, -- NOTE: Changed datatype
[SysStartTime] datetime2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
[SysEndTime] datetime2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
PERIOD FOR SYSTEM_TIME (SysStartTime,SysEndTime)
)
WITH (SYSTEM_VERSIONING=ON(HISTORY_TABLE=[history].[Example]))
GO
Then the build fails with the error message above. Any change to the data type, length, precision, or scale will result in this error (including changing from VARCHAR to CHAR, or VARCHAR(50) to VARCHAR(51); changing NOT NULL to NULL does not produce the error). Doing a Clean does not fix things.
My current workaround is to make sure I have the latest version checked in to source control, then open SQL Server Object Explorer, expand the Projects - XXXX folder, navigate to the affected table, and delete it. Then I have to restore the code (which SSDT deletes) from source control. This procedure is tedious, dangerous, and not what I want to be doing.
Has anyone found a way to fix this? Is it a bug?
I'm using Microsoft Visual Studio Professional 2015, Version 14.0.25431.01 Update 3 with SQL Server Data Tools 14.0.61021.0.
I can reproduce this problem. We (the SQL Server tools team) will work to get this fixed in a future version of SSDT. In the meantime, I believe you can work around this by explicitly defining the history table (i.e. add the history table with its desired schema to the project), and then manually keep the schema of the current and history table in sync.
If you encounter problems with explicitly defining the history table, try closing Visual Studio, deleting the DBMDL file in the project root, and then re-opening the project.
We just experienced this issue. We found a workaround by commenting out the system versioning elements of the table (effectively making it a normal table), building the project with the schema change we needed (which succeeds), and then putting the system versioning lines back in place (which also succeeds).
Just in case someone faces the same issue:
The fix is to go to the [YourDatabaseProject]/bin/Debug folder, clear it, and then build again without removing anything else.
Hope this helps!
What is the format of the schema when creating a new table in VoltDB?
I'm a newbie. I have researched for a while and read the explanation at https://docs.voltdb.com/UsingVoltDB/ChapDesignSchema.php
Please give me more detail about the schema format when I create a new table.
Another question: what is the call flow of the system, from the moment a request comes in until a response is created?
Which classes/functions does it go through in the system?
Since VoltDB is a SQL-compliant database, you create a new table in VoltDB just as you would in any other traditional relational database, e.g.:
CREATE TABLE MY_TABLE (id INTEGER NOT NULL, name VARCHAR(10));
You can find all the SQL DDL statements that you can run on VoltDB here
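Because that DDL is plain standard SQL, you can sanity-check it against any relational engine. As an illustration only (sqlite3 stands in for VoltDB here, since it ships with Python), the same statement runs unchanged:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The same CREATE TABLE statement shown above is standard SQL.
conn.execute("CREATE TABLE MY_TABLE (id INTEGER NOT NULL, name VARCHAR(10))")
conn.execute("INSERT INTO MY_TABLE (id, name) VALUES (?, ?)", (1, "alice"))
print(conn.execute("SELECT id, name FROM MY_TABLE").fetchone())  # (1, 'alice')
```

In VoltDB itself you would feed the identical DDL to the sqlcmd utility instead, as the next answer describes.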
1. Make a file yourSchemaName.sql anywhere on the system. Suppose yourSchemaName.sql looks like this:
CREATE TABLE Customer (
CustomerID INTEGER UNIQUE NOT NULL,
FirstName VARCHAR(15),
LastName VARCHAR (15),
PRIMARY KEY(CustomerID)
);
2. Fire up sqlcmd in the CLI inside the folder where you installed VoltDB (if you haven't set the path, you have to type ./bin/sqlcmd).
Once it is running, a simple way to load a schema into your VoltDB database is to type file /path/to/yourSchemaName.sql; inside the sqlcmd utility, and the schema in yourSchemaName.sql will be imported into the database.
VoltDB is a relational database, so you can now use ordinary SQL queries against it.
I think that the ability of some reflective managed environments (e.g. .NET) to add custom metadata to code entities in the form of attributes is very powerful. Is there any mechanism for doing something similar for databases?
Databases obviously already have a fair amount of metadata available; for example, you can get a list of all tables, columns, and foreign key references, which is enough to put together a schema diagram. However, I could imagine plenty of uses for something more generic, such as something along the lines of this imaginary fusion of C# and DDL:
[Obsolete("Being replaced by the ClientInteraction table")]
create table CustomerOrder (
[SurrogateKey]
MyTableId int identity(1,1) primary key
[NaturalKey]
[ForeignKey("Customer", "CustomerId")] /* Actual constraint left out for speed */
,CustomerId int not null
[NaturalKey]
[ConsiderAsNull(0)]
[ConsiderAsNull(-1)]
,OrderId int not null
[Conditional("DEBUG")]
,InsertDateTime datetime
)
The example is a little contrived but hopefully makes my question clearer. I think that the ability to reflect over this kind of metadata could make many tasks much more easily automated. Is there anything like this out there? I'm working with SQL Server but if there's something for another DBMS then I'd still be interested in hearing about it.
In SQL Server 2005 and up you can use sp_addextendedproperty and fn_listextendedproperty (as well as the SSMS GUI) to set and view descriptions on various database items. Here is an example of how to set and view a description on a table column:
--code generated by SSMS to set a description on a table column
DECLARE @v sql_variant
SET @v = N'your description text goes here'
EXECUTE sp_addextendedproperty N'MS_Description', @v, N'SCHEMA', N'your_schema_name_here', N'TABLE', N'your_table_name_here', N'COLUMN', N'your_column_name_here'
--code to list descriptions for all columns in a table
SELECT * from fn_listextendedproperty (NULL, 'schema', 'your_schema_name_here', 'table', 'your_table_name_here', 'column', default);
You can pull almost anything to do with an object out of SQL Server, if you know where to look. If you need to supply more "attributes", you are extending the problem domain with more "concepts", and this goes against the KISS principle. Also, a relational database is obviously perfectly capable of representing any relations to do with your data, including the example you have shown. I cannot think of a reason why you would want to add user-supplied metadata to a table.
If, however, you have the need, stick to adding the additional attributes in an ORM, or create a separate table that describes the metadata for your specific problem. You also have the ability to link in table-valued functions, which you are free to write in C#, so you could represent your attributes in C#. Again, I think this is overkill.
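The "separate table" route is easy to sketch. Below is a hypothetical layout (the schema_attribute table and its columns are my own invention, and sqlite3 merely stands in for SQL Server) showing how custom attributes like Obsolete or ConsiderAsNull from the question could be stored and then "reflected" over with ordinary queries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A generic attribute store: one row per (object, attribute) pair.
conn.execute("""
    CREATE TABLE schema_attribute (
        table_name  TEXT NOT NULL,
        column_name TEXT,            -- NULL means the attribute is on the table
        attribute   TEXT NOT NULL,
        value       TEXT
    )
""")
rows = [
    ("CustomerOrder", None, "Obsolete", "Being replaced by the ClientInteraction table"),
    ("CustomerOrder", "CustomerId", "NaturalKey", None),
    ("CustomerOrder", "OrderId", "ConsiderAsNull", "0"),
    ("CustomerOrder", "OrderId", "ConsiderAsNull", "-1"),
]
conn.executemany("INSERT INTO schema_attribute VALUES (?, ?, ?, ?)", rows)

# "Reflect" over the metadata, e.g. find all ConsiderAsNull markers:
null_markers = conn.execute(
    "SELECT column_name, value FROM schema_attribute"
    " WHERE attribute = 'ConsiderAsNull'"
).fetchall()
print(null_markers)  # [('OrderId', '0'), ('OrderId', '-1')]
```

Tooling can then query this table the same way it would query the system catalog, which is essentially what the extended-properties mechanism above does under the hood.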
I have two Oracle questions:
How do I translate this SQL Server statement to work on Oracle?
Create table MyCount(Line int identity(1,1))
What is the equivalent of SQL Server's Image type for storing pictures in an Oracle database?
You don't need to use triggers for this if you manage the inserts:
CREATE SEQUENCE seq;
CREATE TABLE mycount
(
line NUMBER(10,0)
);
Then, to insert a value:
INSERT INTO mycount(line) VALUES (seq.nextval);
For images, you can use a BLOB to store any binary data, or a BFILE, which is handled more or less like a BLOB but whose data is stored on the file system, for instance a .jpg file.
References:
Create Sequence reference.
Create table reference.
Oracle® Database Application Developer's Guide - Large Objects.
1: You'll have to create a sequence and a trigger:
CREATE SEQUENCE MyCountIdSeq;
CREATE TABLE MyCount (
Line INTEGER NOT NULL,
...
);
CREATE TRIGGER MyCountInsTrg BEFORE INSERT ON MyCount FOR EACH ROW
BEGIN
SELECT MyCountIdSeq.NEXTVAL INTO :new.Line FROM dual;
END;
/
2: BLOB.
Our tools can answer these questions for you; I'm talking about Oracle SQL Developer.
First, it has a Create Table wizard, and Oracle Database 12c and 18c support a native implementation of identity columns.
And your new table DDL
CREATE TABLE MYCOUNT
(
LINE INT GENERATED ALWAYS AS IDENTITY NOT NULL
);
Also, we have a Translator - it can take SQL Server bits and turn them into equivalent Oracle bits. There's a full-blown migration wizard which will capture and convert your entire data model.
But for one-offs, you can use your Scratchpad. It's available under the Tools, Migrations menu.
Here it is taking your code and giving you something that would work in any Oracle Database.
Definitely use the identity feature in 12/18c if you're on that version of Oracle. Fewer db objects to maintain.