Does Microsoft SQL Server provide bitemporal historization? - sql-server

I want to use bitemporal historization in Microsoft SQL Server as I know it from e.g. DB2 (https://www.ibm.com/docs/en/db2-for-zos/12?topic=tables-creating-bitemporal).
There we can create a table via
CREATE TABLE policy_info
(policy_id CHAR(4) NOT NULL,
coverage INT NOT NULL,
bus_start DATE NOT NULL,
bus_end DATE NOT NULL,
sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
sys_end TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
create_id TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
PERIOD BUSINESS_TIME(bus_start, bus_end),
PERIOD SYSTEM_TIME(sys_start, sys_end));
where
SYSTEM_TIME (which also exists in SQL Server) refers to the technical record of database changes and logs every change to a history table, and
BUSINESS_TIME (does this exist in SQL Server?) refers to the business-related validity of the data (e.g. the last name of the employee with Employee_ID = 4711 was "Schmidt" from 2021-06-03 until 2021-07-05 and "Müller" from 2021-07-06 until 2022-01-23).
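For illustration, in DB2 such a table can then be queried with a period specification. A sketch, with a made-up policy id:
-- DB2: the coverage that was business-valid on 2021-07-06
SELECT coverage
FROM policy_info
FOR BUSINESS_TIME AS OF '2021-07-06'
WHERE policy_id = 'A123';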
Does Microsoft SQL Server provide a feature analogous to BUSINESS_TIME in DB2?

Answer (from a comment): There is no bitemporal historization in SQL Server. SQL Server supports system-versioned temporal tables (SYSTEM_TIME), but it has no application-time period equivalent to DB2's BUSINESS_TIME, so the business validity interval has to be modeled with ordinary columns and maintained by the application.
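What you can get natively is the system-time half. A minimal T-SQL sketch, assuming SQL Server 2016 or later and reusing the names from the DB2 example: the system period is generated and versioned by the engine, while bus_start/bus_end are plain columns that the application must maintain itself:
-- System versioning is native; "business time" is just two ordinary
-- columns plus whatever constraints the application enforces.
CREATE TABLE dbo.policy_info
(
    policy_id CHAR(4) NOT NULL PRIMARY KEY,
    coverage  INT NOT NULL,
    bus_start DATE NOT NULL,  -- business validity, app-maintained
    bus_end   DATE NOT NULL,  -- business validity, app-maintained
    sys_start DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    sys_end   DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (sys_start, sys_end),
    CONSTRAINT CK_policy_bus_period CHECK (bus_start < bus_end)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.policy_info_history));
Queries can then use FOR SYSTEM_TIME AS OF for the system dimension, while filtering on the business period remains an ordinary WHERE bus_start <= @d AND @d < bus_end.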

Related

Django using SQL Server: how do I trigger the procedures?

I'm using SQL Server as the database in a Django project for college, and I need to fire my triggers and procedures through Django. I've been looking for a few days for a way to do this and I can't find one. Can anyone help me?
In case anyone else ends up on this page, this is how I managed to do it. My interpretation of the question is how to have actions on the database trigger functions on the database end. My database backend is PostgreSQL, but since SQL is a standard, the queries for MySQL and others should be about the same.
The solution is relatively simple. Once you do your first
python manage.py makemigrations
python manage.py migrate
Head over to your database manager of choice and look up the SQL query that generated the table on which you wish to have your trigger.
For example, your public.auth_user table creation query might look like this:
CREATE TABLE public.auth_user
(
id integer NOT NULL DEFAULT nextval('auth_user_id_seq'::regclass),
password character varying(128) COLLATE pg_catalog."default" NOT NULL,
last_login timestamp with time zone,
is_superuser boolean NOT NULL,
username character varying(150) COLLATE pg_catalog."default" NOT NULL,
first_name character varying(30) COLLATE pg_catalog."default" NOT NULL,
last_name character varying(150) COLLATE pg_catalog."default" NOT NULL,
email character varying(254) COLLATE pg_catalog."default" NOT NULL,
is_staff boolean NOT NULL,
is_active boolean NOT NULL,
date_joined timestamp with time zone NOT NULL,
CONSTRAINT auth_user_pkey PRIMARY KEY (id),
CONSTRAINT auth_user_username_key UNIQUE (username)
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
Let's say you want a trigger that changes the last_name of every new record to the value "Trump" (without quotation marks). The code to create your trigger function would look like this (N.B. the RAISE NOTICE lines just echo information to the SQL terminal for debugging; you can comment them out by adding a double dash in front of them, like --RAISE NOTICE 'id = % ', NEW.id;):
CREATE OR REPLACE FUNCTION trumpisizer() RETURNS trigger AS $$
BEGIN
RAISE NOTICE 'last_name = % ', NEW.last_name;
NEW.last_name = 'Trump';
RAISE NOTICE 'last_name = % ', NEW.last_name;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
Now you need to bind your function to your table. The SQL query to do so is:
CREATE TRIGGER trumpist BEFORE INSERT ON auth_user FOR EACH ROW EXECUTE PROCEDURE trumpisizer();
Now load up your django app and create a new user. Every new user's last_name will be changed to the new value.
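If you want to verify the trigger outside Django, a direct INSERT shows the rewrite (a sketch; the values are made up, and Django normally fills these columns itself):
-- Insert a row directly; the BEFORE INSERT trigger rewrites last_name.
INSERT INTO auth_user
    (password, is_superuser, username, first_name, last_name,
     email, is_staff, is_active, date_joined)
VALUES
    ('!', false, 'testuser', 'Jane', 'Smith',
     'test@example.com', false, true, now());

SELECT username, last_name FROM auth_user WHERE username = 'testuser';
-- returns: testuser | Trump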
Slightly off topic advice, so please forgive me.
Why not use Flask? The power of Django is largely in working within its framework: you build your database with models.py, you perform migrations through the framework, and you leverage it for data operations with custom middleware or signals.
If you have a db that already does a lot of the heavy lifting for you, then it might be easier to work with a less "batteries included" framework than Django, such as Flask or Bottle. This is especially true if it's a college project that doesn't require enterprise features or stability; it might be easier to hack and slash through a less rigidly defined framework. And if it's towards the end of the semester, learning Django might be a tall order.
I'm just going to answer the question, but I can tell you from experience that you're probably headed down some paths that are far off what would be considered best practice. You may want to do a little more digging if this is going to become a permanent project; if it's a learning exercise, that's cool too!
On to the answer: Django is written in Python, and you can trigger stored procedures in SQL Server from Python using pyodbc. To use SQL Server with Django's ORM, you'll want a Django engine such as django-pyodbc-azure (install with pip install django-pyodbc-azure), which will also install pyodbc. If you're running Django on Linux or Mac, you'll also need a SQL Server-compatible ODBC driver, such as the MS ODBC driver or FreeTDS (for up-to-date details on installing drivers on Linux, see https://pyphilly.org/django-and-sql-server-2018-edition/).
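As a sketch of the SQL Server side, here is a hypothetical stored procedure the Django app could then execute through a pyodbc cursor, e.g. cursor.execute("EXEC dbo.usp_set_last_name ?, ?", (4711, 'Smith')); the procedure name, parameters, and table are made up for illustration:
-- Hypothetical procedure for the Django app to call via pyodbc.
CREATE PROCEDURE dbo.usp_set_last_name
    @user_id   INT,
    @last_name NVARCHAR(150)
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.auth_user
    SET last_name = @last_name
    WHERE id = @user_id;
END;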
Good luck.

Copy data from a SQL Server table into a historic table and add a timestamp of the time copied?

I'm trying to work out a way to copy all data from a particular table (let's call it opportunities) into a new table, together with a timestamp of when it was copied, for the sole purpose of building up historic data in a database hosted in Azure SQL Data Warehouse.
What's the best way to do this? So far I've gone and created a duplicate table in the data warehouse, with an additional column called datecopied
The query I've started using is:
SELECT OppName, Oppvalue
INTO Hst_Opportunities
FROM dbo.opportunities
I am not really sure where to go from here!
SELECT INTO is not supported in Azure SQL Data Warehouse at this time. You should familiarise yourself with the CREATE TABLE AS or CTAS syntax, which is the equivalent in Azure DW.
If you want to fix the copy date, simply assign it to a variable prior to the CTAS, something like this:
DECLARE @copyDate DATETIME2 = CURRENT_TIMESTAMP;
CREATE TABLE dbo.Hst_Opportunities
WITH
(
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = ROUND_ROBIN
)
AS
SELECT OppName, Oppvalue, @copyDate AS copyDate
FROM dbo.opportunities;
I should also mention that the use case for Azure DW is millions and billions of rows with terabytes of data. It doesn't tend to do well at low volume, so consider whether you need this product, a traditional SQL Server 2016 install, or Azure SQL Database.
You can write an INSERT INTO ... SELECT query like the one below, which works with SQL Server 2008+ and Azure SQL Data Warehouse:
INSERT INTO Hst_Opportunities
SELECT OppName, Oppvalue, DATEDIFF(SECOND,{d '1970-01-01'},current_timestamp)
FROM dbo.opportunities

Schema and call flow in VoltDB

What is the format of the schema when we create a new table using VoltDB?
I'm a newbie. I have researched for a while and read the explanation at https://docs.voltdb.com/UsingVoltDB/ChapDesignSchema.php.
Please give me more detail about the schema format when I create a new table.
Another question is: what is the call flow through the system, from the moment a request arrives until a response is created?
Which classes/functions does it go through in the system?
Since VoltDB is a SQL-compliant database, you create a new table in VoltDB just as you would in any other traditional relational database. For example:
CREATE TABLE MY_TABLE (id INTEGER NOT NULL, name VARCHAR(10));
You can find all the SQL DDL statements that you can run on VoltDB here
1. Make a file yourSchemaName.sql anywhere on the system. Suppose yourSchemaName.sql looks like this:
CREATE TABLE Customer (
CustomerID INTEGER UNIQUE NOT NULL,
FirstName VARCHAR(15),
LastName VARCHAR (15),
PRIMARY KEY(CustomerID)
);
2. Fire up sqlcmd in the CLI inside the folder where you installed VoltDB (if you haven't set the path, you have to type ./bin/sqlcmd).
After starting sqlcmd, a simple way to load the schema into your VoltDB database is to type the directive file /path/to/yourSchemaName.sql; inside the sqlcmd utility, and the schema in yourSchemaName.sql will be imported into the database.
VoltDB is a relational database, so you can now use ordinary SQL queries against it.
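For instance, once the Customer schema above is loaded, ordinary SQL statements work as usual:
-- Insert and read back a row in the Customer table defined above.
INSERT INTO Customer (CustomerID, FirstName, LastName)
VALUES (1, 'Jane', 'Doe');

SELECT FirstName, LastName FROM Customer WHERE CustomerID = 1;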

SQL Server ODBC Date field - Optional feature not implemented

I have a SQL Server table which has fields of type Date in it. I am trying to update or insert a record into the table via Microsoft Access using ODBC. I get the error:
[ODBC SQL Server Driver]Optional feature not implemented
when I try and update or insert a record.
I have to use Date fields rather than DateTime fields in my table because I am working with very old dates going back 2,000 years.
Is there any way around this problem, which I assume is caused by the Date fields?
This is what the table looks like
CREATE TABLE [dbo].[Person](
[PersonId] [int] IDENTITY(1,1) NOT NULL,
[DOB] [date] NOT NULL,
[DOD] [date] NULL DEFAULT (NULL),
[Name] [nvarchar](100) NOT NULL)
Your best bet is to dump the use of the "legacy" SQL driver and use the newer Native Client 10 or 11 driver. The older driver views date fields as text, but the newer Native Client 10/11 driver sees the column as a date column. This will require you to re-link your tables.
If you can't change your SQL Server version, an easier solution is to pass the date as an adVarChar and then do a CAST(@param AS DATE) in your SQL stored procedure.
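A minimal sketch of that workaround, assuming an ISO-formatted date string and the Person table from the question (the procedure name and parameters are made up):
-- Hypothetical procedure: the date arrives as a string (adVarChar
-- from Access) and is cast to DATE inside T-SQL.
CREATE PROCEDURE dbo.usp_insert_person
    @name NVARCHAR(100),
    @dob  VARCHAR(10)    -- ISO format, e.g. '0021-06-03'
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.Person (Name, DOB)
    VALUES (@name, CAST(@dob AS DATE));
END;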
I've experienced the same problem today.
I use MS Access 2010 for development and have SQL Server 2012 at the back end.
There was no problem on my computer, but other clients that use the accde runtime version experienced this trouble.
After several trials, the issue was resolved by replacing the DATE type with SMALLDATETIME. Please try this.
Indeed I only needed the date part, not the time, but that's OK anyway!
[DOB] [smalldatetime] NOT NULL,
[DOD] [smalldatetime] NULL DEFAULT (NULL),
Hope this helps you as well.

SQL Azure raise 40197 error (level 20, state 4, code 9002)

I have a table in an Azure SQL DB (S1, 250 GB limit) with 47,000,000 records (3.5 GB in total). I tried to add a new calculated column, but after 1 hour of script execution I get: "The service has encountered an error processing your request. Please try again. Error code 9002." After several tries, I get the same result.
Script for simple table:
create table dbo.works (
work_id int not null identity(1,1) constraint PK_WORKS primary key,
client_id int null constraint FK_user_works_clients2 REFERENCES dbo.clients(client_id),
login_id int not null constraint FK_user_works_logins2 REFERENCES dbo.logins(login_id),
start_time datetime not null,
end_time datetime not null,
caption varchar(1000) null)
Script for alter:
alter table dbo.works add delta_secs as datediff(second, start_time, end_time) PERSISTED
Error message:
9002 is the SQL Server error for a transaction log file that cannot grow. On a local server I could manage the log file, but in Azure I cannot manage this parameter.
How can I change the structure of populated tables?
Azure SQL Database has a 2 GB transaction size limit, which you are running into. For schema changes like yours, you can create a new table with the new schema and copy the data in batches into this new table, as sketched below.
That said, the limit has been removed in the latest service version, V12. You might want to consider upgrading to avoid having to implement a workaround.
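A minimal sketch of the batch-copy workaround, reusing the works table from the question (the new table name and the batch size are illustrative):
-- New table already containing the computed column.
CREATE TABLE dbo.works_new (
    work_id    INT NOT NULL PRIMARY KEY,
    client_id  INT NULL,
    login_id   INT NOT NULL,
    start_time DATETIME NOT NULL,
    end_time   DATETIME NOT NULL,
    caption    VARCHAR(1000) NULL,
    delta_secs AS DATEDIFF(second, start_time, end_time) PERSISTED
);

-- Copy by key range so each INSERT commits its own small transaction
-- and stays well under the log limit.
DECLARE @batch INT = 1000000, @fromId INT = 0, @maxId INT;
SELECT @maxId = MAX(work_id) FROM dbo.works;

WHILE @fromId <= @maxId
BEGIN
    INSERT INTO dbo.works_new (work_id, client_id, login_id, start_time, end_time, caption)
    SELECT work_id, client_id, login_id, start_time, end_time, caption
    FROM dbo.works
    WHERE work_id > @fromId AND work_id <= @fromId + @batch;

    SET @fromId = @fromId + @batch;
END;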
Look at sys.database_files by connecting to the user database. If the log file's current size has reached its max size, then you have hit this limit. At that point you either have to kill the active transactions or move to a higher tier (if killing them is not possible because of the amount of data you are modifying in a single transaction).
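A quick way to check (a sketch; size and max_size in sys.database_files are reported in 8 KB pages):
SELECT name,
       size * 8 / 1024     AS current_size_mb,
       max_size * 8 / 1024 AS max_size_mb
FROM sys.database_files
WHERE type_desc = 'LOG';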
You can also get the same by doing:
DBCC SQLPERF(LOGSPACE);
A couple of ideas:
1) Try creating an empty column for delta_secs, then filling in the data separately. If this still results in transaction log errors, try updating part of the data at a time with a WHERE clause.
2) Don't add a column at all. Instead, add a view with delta_secs as a calculated field, as sketched after the link below. Since this is a derived field, this is probably a better approach anyway.
https://msdn.microsoft.com/en-us/library/ms187956.aspx
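A sketch of the view from idea 2, based on the works table above:
CREATE VIEW dbo.works_with_delta
AS
SELECT work_id, client_id, login_id, start_time, end_time, caption,
       DATEDIFF(second, start_time, end_time) AS delta_secs
FROM dbo.works;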
