How to convert sql_variant so it can be considered deterministic

I am trying to create a persisted computed column in a SYSTEM_VERSIONING table dbo.Users as follows:
ALTER TABLE dbo.Users
ADD SessionId AS usr.GetSession() PERSISTED
CONSTRAINT FK_dboUsers_IdSession
FOREIGN KEY REFERENCES dbo.Sessions(IdSession)
Where usr.GetSession() just retrieves the value stored as BIGINT in SESSION_CONTEXT('IdSession') and converts it back to BIGINT.
CREATE OR ALTER FUNCTION usr.GetSession()
RETURNS BIGINT WITH SCHEMABINDING
AS
BEGIN
RETURN CONVERT(BIGINT, SESSION_CONTEXT(N'IdSession'))
END
But getting the following error:
Computed column 'SessionId' in table 'Users' cannot be persisted because the column is non-deterministic.
It is obviously because:
SELECT OBJECTPROPERTY(OBJECT_ID('usr.GetSession'), 'IsDeterministic') AS IsDeterministic;
Is returning 0
A little bit of searching found this in Deterministic and nondeterministic functions:
CONVERT
Deterministic unless one of these conditions exists:
Source type is sql_variant.
Target type is sql_variant and its source type is nondeterministic.
So, I understand that there is no way to make my computed column persisted with a user-defined scalar function, as sql_variant cannot be handled as a deterministic value.
Or is there some workaround to solve my problem? Or some other solution? Any ideas?

No, there is no workaround. You cannot do anything with sql_variant unless you convert it (even implicitly), and as you mention, that is not deterministic.
Be that as it may, it seems you are going down the wrong road anyway.
A computed column is the wrong thing here, as in this case it would change every time it was read, whereas it seems you want it changed every time the row is inserted.
Instead you need a DEFAULT:
ALTER TABLE dbo.Users
ADD SessionId bigint DEFAULT (usr.GetSession())
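For illustration, a minimal sketch of how that default behaves on insert (the UserName column is hypothetical, purely for the example):
-- Set the session value, then insert; the DEFAULT captures it at insert time.
EXEC sys.sp_set_session_context @key = N'IdSession', @value = 42;
INSERT INTO dbo.Users (UserName) VALUES (N'alice');  -- UserName is a hypothetical column
SELECT UserName, SessionId FROM dbo.Users;           -- SessionId = 42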

Related

Can SQL Server timestamp type column be NULL?

I read the document on "CREATE TABLE" at https://learn.microsoft.com/en-us/sql/t-sql/statements/create-table-transact-sql?view=sql-server-ver15
It said
timestamp data types must be NOT NULL.
However, when I create a table, I can create a field with timestamp type and make it nullable. So, what is the problem?
Update
When using the following query:
USE MyDB6;
CREATE TABLE MyTable (Col1 timestamp NULL);
I expect an error saying the column Col1 cannot be NULL but nothing happens.
After creating the table, I run the following query:
USE MyDB6
SELECT COLUMNPROPERTY(OBJECT_ID('MyTable', 'U'), 'Col1', 'AllowsNull');
I expect the result is 0, but actually it is 1.
So my question is: although the documentation says "timestamp data types must be NOT NULL.", and in practice this data type will indeed never be NULL, why does the CREATE TABLE query not prevent me from setting it to nullable, and why does the system still save the column as nullable?
Like marc_s said in their comment, this datatype is handled internally and will never be null. Try the following:
declare @test timestamp = null -- ROWVERSION would be less confusing
select #test
It does not return NULL
As to why you're allowed to declare it as nullable: what would be gained by creating this deviation from the standard? You cannot INSERT NULL into a TIMESTAMP/ROWVERSION column, and you cannot UPDATE it at all. I imagine it would be quite a lot of trouble to alter the CREATE syntax to make certain data types non-nullable; more trouble than it's worth.
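You can see the same thing at the column level; a quick sketch (the table name is illustrative):
CREATE TABLE dbo.TsDemo (Col1 timestamp NULL, Col2 int NULL);
INSERT INTO dbo.TsDemo (Col2) VALUES (1);              -- Col1 omitted
SELECT Col1 FROM dbo.TsDemo;                           -- engine-generated rowversion, not NULL
INSERT INTO dbo.TsDemo (Col1, Col2) VALUES (NULL, 2);  -- fails: cannot insert an explicit value into a timestamp column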

Alter Column: option to specify conversion function?

I have a column of type float that contains phone numbers - I'm aware that this is bad, so I want to convert the column from float to nvarchar(max), converting the data appropriately so as not to lose data.
The conversion can apparently be handled correctly using the STR function (suggested here), but I'm not sure how to go about changing the column type and performing the conversion without creating a temporary column. I don't want to use a temporary column because we are doing this automatically a bunch of times in future and don't want to encounter performance impact from page splits (suggested here)
In Postgres you can add a "USING" option to your ALTER COLUMN statement that specifies how to convert the existing data. I can't find anything like this for TSQL. Is there a way I can do this in place?
Postgres example:
...ALTER COLUMN <column> TYPE <type> USING <func>(<column>);
Rather than use a temporary column in your table, use a (temporary) column in a temporary table. In short (see the sketch after these steps):
Create a temp table with the PK of your table + the column you want to change (in the correct data type, of course)
Select data into the temp table using your conversion method
Change the data type in the actual table
Update the actual table from the temp table values
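A minimal sketch of those steps, assuming a dbo.Contacts table with an Id primary key and a float Phone column (all names here are illustrative):
-- 1. Capture the converted values, keyed by the PK
SELECT Id, LTRIM(STR(Phone, 15, 0)) AS Phone
INTO #conv
FROM dbo.Contacts;
-- 2. Change the data type in place (existing float values are implicitly
--    converted, possibly to scientific notation; we overwrite them next)
ALTER TABLE dbo.Contacts ALTER COLUMN Phone nvarchar(20);
-- 3. Overwrite with the properly converted values
UPDATE c
SET c.Phone = t.Phone
FROM dbo.Contacts AS c
JOIN #conv AS t ON t.Id = c.Id;
DROP TABLE #conv;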
If the table is large, I'd suggest doing this in batches. Of course, if the table isn't large, worrying about page splits is premature optimization, since doing a complete rebuild of the table and its indexes after the conversion would be cheap.
Another question is: why nvarchar(max)? The data is phone numbers. Last time I checked, phone numbers were fairly short (certainly less than the 2 GB that nvarchar(max) can hold) and non-Unicode. Do some domain modeling to figure out the appropriate data size and you'll thank me later. Lastly, why would you do this "automatically a bunch of times in future"? Why not have the correct data type and insert the right values?
In SQL Server:
CREATE TABLE dbo.Employee
(
EmployeeID INT IDENTITY (1,1) NOT NULL
,FirstName VARCHAR(50) NULL
,MiddleName VARCHAR(50) NULL
,LastName VARCHAR(50) NULL
,DateHired datetime NOT NULL
)
-- Change the datatype to support 100 characters and make NOT NULL
ALTER TABLE dbo.Employee
ALTER COLUMN FirstName VARCHAR(100) NOT NULL
-- Change datatype and allow NULLs for DateHired
ALTER TABLE dbo.Employee
ALTER COLUMN DateHired SMALLDATETIME NULL
-- Set SPARSE column for MiddleName (SQL Server 2008 and later)
ALTER TABLE dbo.Employee
ALTER COLUMN MiddleName VARCHAR(100) SPARSE NULL
http://sqlserverplanet.com/ddl/alter-table-alter-column

Is it possible to alter a SQL Server table column datatype from bigint to varchar after it has been populated?

I have a SQL Server 2008 table which contains an external user reference currently stored as a bigint - the userid from the external table. I want to extend this to allow email address, open ID etc to be used as the external identifier. Is it possible to alter the column datatype from bigint to varchar without affecting any of the existing data?
Yes, that should be possible, no problem - as long as you make your VARCHAR field big enough to hold your BIGINT values :-)
You'd have to use something like this T-SQL:
ALTER TABLE dbo.YourTable
ALTER COLUMN YourColumnName VARCHAR(50) -- or whatever you want
and that should be it! Since all BIGINT values can be converted into a string, that command should work just fine and without any danger of losing data.
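For sizing, the widest BIGINT value is 20 characters (19 digits plus a sign), so anything from VARCHAR(20) up is safe; a quick check:
SELECT LEN(CONVERT(varchar(20), CAST(-9223372036854775808 AS bigint))) AS WidestBigint;  -- returns 20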

Transact-SQL / Check if a name already exists

Simple question here.
Context: A Transact-SQL table with an int primary key, and a name that also must be unique (even though it's not a primary key). Let's say:
TableID INT,
TableName NVARCHAR(50)
I'm adding new rows to this table through a stored procedure (and, thus, specifying TableName with a parameter).
Question: What's the best/simplest way to verify if the provided TableName parameter already exist in the table, and to prevent the add of a new row if it's the case?
Is it possible to do this directly within my AddNewRow stored procedure?
If you're using SQL Server 2008 then you could use a MERGE statement in your sproc:
MERGE INTO YourTable AS target
USING (VALUES (@tableName)) AS source (TableName)
ON target.TableName = source.TableName
WHEN NOT MATCHED THEN
INSERT (TableName) VALUES (source.TableName);
You should still ensure that the TableName column has a UNIQUE constraint.
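For instance, something like this (the constraint name is illustrative):
ALTER TABLE YourTable ADD CONSTRAINT UQ_YourTable_TableName UNIQUE (TableName);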
Add a unique constraint on TableName and handle the error if you try to insert a duplicate.
This avoids any issues with concurrent transactions inserting a duplicate between your check that the name is not there and your insert.
See this related question.
I would prefer using a unique constraint on the column and then explicitly checking for existence.
Handling an exception will still consume an identity increment, if one is present.
Secondly, the exception can be avoided by checking for existence before insertion, which is otherwise the more expensive operation:
IF EXISTS (SELECT TOP(1) ColName FROM MyTable WHERE ColName = @myParameter)
If you use a unique constraint you can also back it with a unique nonclustered index, giving you fast retrieval as well.
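Putting that together, a sketch of what the check could look like inside the procedure (table and column names are illustrative; the UPDLOCK/HOLDLOCK hints close the concurrency window mentioned above, and the unique constraint remains the final safety net):
CREATE PROCEDURE dbo.AddNewRow
    @tableName NVARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
    -- The lock hints keep a concurrent transaction from inserting the same
    -- name between our check and our insert.
    IF NOT EXISTS (SELECT 1 FROM dbo.MyTable WITH (UPDLOCK, HOLDLOCK)
                   WHERE TableName = @tableName)
        INSERT INTO dbo.MyTable (TableName) VALUES (@tableName);
    COMMIT TRANSACTION;
END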

SQL Server 2000 constraint involving column on different table

I would like a constraint on a SQL Server 2000 table column that is sort of a combination of a foreign key and a check constraint. The value of my column must exist in the other table, but I am only concerned with values in the other table where one of its columns equal a specified value. The simplified tables are:
import_table:
part_number varchar(30)
quantity int
inventory_master:
part_number varchar(30)
type char(1)
So I want to ensure the part_number exists in inventory_master, but only if the type is 'C'. Is this possible? Thanks.
You could use an INSTEAD OF INSERT trigger to emulate that behaviour.
Check value existence when an insert is about to occur.
You could use a trigger on INSERT and UPDATE statements, which would ensure the integrity.
CREATE TRIGGER syntax: http://msdn.microsoft.com/en-us/library/ms189799.aspx
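For example, a minimal sketch of such a trigger, using the table and column names from the question (the trigger name is made up; the syntax is SQL Server 2000 compatible):
CREATE TRIGGER trg_import_check_part ON import_table
FOR INSERT, UPDATE
AS
-- Reject the statement if any inserted/updated row references a part number
-- that does not exist in inventory_master with type 'C'.
IF EXISTS (
    SELECT 1
    FROM inserted i
    WHERE NOT EXISTS (
        SELECT 1
        FROM inventory_master m
        WHERE m.part_number = i.part_number
          AND m.type = 'C'
    )
)
BEGIN
    RAISERROR('part_number must exist in inventory_master with type ''C''.', 16, 1)
    ROLLBACK TRANSACTION
END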
