Why is this type conversion rejected as non-deterministic for a PERSISTED computed column in return tables of user-defined functions (UDF) in SQL Server?
CREATE FUNCTION MyTimeIntervalFunction(@Param1 INT)
RETURNS @MyTimeInterval TABLE
(
StartUtc DATETIME NOT NULL PRIMARY KEY
,EndUtc DATETIME NOT NULL
,DateUtc AS CONVERT(DATE, StartUtc) PERSISTED
)
AS BEGIN
--do stuff
RETURN
END
Note that this is not converting to or from a string representation, so I don't see why it fails; globalization/region settings should be irrelevant.
This works outside of a UDF (including stored procedures):
DECLARE @MyTimeInterval TABLE
(
StartUtc DATETIME NOT NULL PRIMARY KEY
,EndUtc DATETIME NOT NULL
,DateUtc AS CONVERT(DATE, StartUtc) PERSISTED
)
INSERT INTO @MyTimeInterval(StartUtc, EndUtc)
VALUES ('2018-01-01', '2018-01-02')
SELECT * FROM @MyTimeInterval
Adding WITH SCHEMABINDING to the UDF definition seems to silence the error, but I don't understand why, because it looks like that only marks the function output as deterministic based on its input parameters. And I have to do other non-deterministic things in my function, so it is not a candidate workaround.
Wonky string manipulation could also be a workaround, but is not preferable; CONVERT with style 126 (ISO 8601) is still treated as non-deterministic by SQL Server. Is the only option to abandon persisted computed columns?
As mentioned at the beginning of this somewhat related answer, not specifying WITH SCHEMABINDING means SQL Server skips checks on such things as determinism and data access.
Since PERSISTED on a computed column requires the "computed column expression" to be deterministic, and SQL Server is skipping any check of whether or not it actually is deterministic, it won't be allowed. The same error would occur even if you had something as simple as i AS 1 PERSISTED.
(This is unrelated to whether everything in the function itself is deterministic.)
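For reference, here is roughly what the asker's SCHEMABINDING variant would look like; this is only a sketch with a placeholder body. With the option in place SQL Server actually performs the determinism check, and CONVERT(DATE, StartUtc) passes it because no string conversion is involved:
CREATE FUNCTION MyTimeIntervalFunction(@Param1 INT)
RETURNS @MyTimeInterval TABLE
(
StartUtc DATETIME NOT NULL PRIMARY KEY
,EndUtc DATETIME NOT NULL
,DateUtc AS CONVERT(DATE, StartUtc) PERSISTED
)
WITH SCHEMABINDING
AS BEGIN
--do stuff (note: schema binding also restricts what the body may reference)
RETURN
END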
All that said, using PERSISTED in a TVF doesn't actually add anything to the function, as far as I know.
I got the following message on MS SQL Server (I'm translating from German):
"Table 'VF_Fact', column ORGUNIT_CD, Value: 1185. The attribute is
ORGUNIT_CD. Row was dropped because attribute key was not found.
Attribute: ORGUNIT_CD in the dimension 'Organization' from database
'Dashboard', Cube 'Box Cube'..."
I checked the fact table 'VF_Fact' and the column ORGUNIT_CD - there I was able to find the value '1185'. The column ORGUNIT_CD is defined as follows in the view:
CAST( COALESCE( emp.ORGUNIT_CD, 99999999 ) AS char(8)) AS ORGUNIT_CD,
In addition, the view retrieves the column from the L_Employee_SAP table, where ORGUNIT_CD is defined as follows:
[ORGUNIT_CD] [char](8) NOT NULL,
And the value I find here is not '1185' but '00001185'.
The Fact table 'VF_Fact' is connected with the table L_ORG in which the column ORGUNIT_CD is defined as follows:
[ORGUNIT_CD] [char](8) NOT NULL,
This table has the following value in the ORGUNIT_CD column: '00001185'.
Can anyone please explain why I am getting this error, and how to fix it?
From this answer:
COALESCE:
Return Types
Returns the data type of expression with the highest data type precedence. If all expressions are nonnullable, the result is typed as nonnullable.
(Emphasis added.) int has a higher precedence than varchar, so the return type of your COALESCE must be of type int - which is why the leading zeroes are stripped when your char value is converted.
As another answer noted, ISNULL() behaves differently: rather than returning the data type with the highest precedence, it returns the data type of the first value (thus, @Aleem's answer would solve your issue). A more detailed explanation can be found here under the section "Data Type of Expression."
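A quick way to see the difference in isolation (the variable name here is made up):
DECLARE @OrgUnit char(8) = '00001185'

SELECT COALESCE(@OrgUnit, 99999999) AS CoalesceResult -- typed int by precedence: 1185, zeroes lost
      ,ISNULL(@OrgUnit, 99999999)   AS IsNullResult   -- typed char(8) from the first argument: '00001185'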
In your specific case, I'd actually recommend that you enclose the replacement value in single quotes, thus tipping SQL Server off to the fact that you intend this to be a character field. This means your expression would be one of the following:
CAST (ISNULL( emp.ORGUNIT_CD, '99999999' ) as char(8))
CAST (COALESCE( emp.ORGUNIT_CD, '99999999' ) AS char(8))
The advantage of using quotes in this situation? If you (or another developer) come back to this down the line and try to change it to COALESCE() or make any other modification, it's still going to work without breaking anything, because you told SQL Server what data type you want in the string itself. Depending on what else you're trying to do, you might even be able to remove the CAST() entirely.
COALESCE( emp.ORGUNIT_CD, '99999999' )
The COALESCE function is dropping the leading zeroes. If you are checking for nulls, you can do this instead and it will keep the zeroes:
CAST (ISNULL( emp.ORGUNIT_CD, 99999999 ) as char(8))
I fetch a result set with an Execute SQL Task. It has only one column, NullTime (varchar). It has three rows, and the first one is NULL. I want to simply iterate over these rows and display their values. If I do it purely with a C# script, there is no problem: the NULL is displayed as a blank. The same thing can also be done with a foreach loop.
To do it with a foreach loop: use the loop to read each row and assign its value to the SSIS string variable User::STR_WORD, then simply display User::STR_WORD with a C# script task.
In the pure C# method, I can even assign the blank value (actually a NULL) to the SSIS string. But with the foreach loop method, I get an error because of the NULL value.
The error is -
Error: The type of the value being assigned to variable "User::STR_WORD" differs from the current variable type. Variables may not change type during execution. Variable types are strict, except for variables of type Object.
How do I fix this error? Is a script the only alternative to what seems to be a flawed foreach loop?
A workaround for this is the COALESCE function, which will convert NULLs to a value you specify.
SELECT
COALESCE([YourColumn], 'EnterValueHere') AS [YourColumn]
FROM YourTable
So, this will replace a null with the value you want to use instead. It will prevent the foreach loop from suddenly crashing.
Create a temporary table to see this working -
create table ##tester
(names varchar(25))
insert into ##tester
values(Null)
insert into ##tester
values('Not a null value')
select names from ##tester
select coalesce(names, 'Null has been obliterated!') from ##tester
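Applied to the result set from the question, the query inside the Execute SQL Task could look something like this (the source table name is just a placeholder):
SELECT COALESCE(NullTime, '') AS NullTime
FROM dbo.SourceTable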
In Sql Server I am using an XML type column to store a message. I do not want to store duplicate messages.
I only will have a few messages per user. I am currently querying the table for these messages, converting the XML to string in my C# code. I then compare the strings with what I am about to insert.
Unfortunately, SQL Server normalizes the data stored in XML typed fields. What you store into the database is not necessarily exactly the same string as what you get back out later. It is functionally equivalent, but may have whitespace removed, etc.
Is there an efficient way to compare an XML string that I am considering inserting with those that are already in the database? As an aside, if I detect a duplicate I need to delete the older message then insert the replacement.
0 - Add a hash column to your table.
1 - When you receive a new message, convert the whole XML to uppercase, remove all blanks and carriage returns/linefeeds, then compute the hash value of the normalized string.
2 - Check whether you already have a row with the resulting hash code in it:
If yes, this is a duplicate; treat it accordingly.
If not, store the original XML along with the hash in a new row.
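A rough T-SQL sketch of steps 1 and 2, assuming an invented dbo.Messages table with MsgXml (XML) and MsgHash (VARBINARY) columns; the normalization below only strips spaces and line breaks, so adjust it to your data:
DECLARE @NewXml XML = '<message>You like apples</message>'

-- Step 1: normalize (uppercase, strip blanks and CR/LF) and hash
-- (note: before SQL Server 2016, HASHBYTES input is limited to 8000 bytes)
DECLARE @Normalized NVARCHAR(MAX) =
    REPLACE(REPLACE(REPLACE(UPPER(CONVERT(NVARCHAR(MAX), @NewXml)), ' ', ''), CHAR(13), ''), CHAR(10), '')
DECLARE @Hash VARBINARY(20) = HASHBYTES('SHA1', @Normalized)

-- Step 2: only insert when no existing row carries the same hash
IF NOT EXISTS (SELECT 1 FROM dbo.Messages WHERE MsgHash = @Hash)
    INSERT INTO dbo.Messages (MsgXml, MsgHash) VALUES (@NewXml, @Hash)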
I'm not 100% sure of your exact implementation, but here is something I played around with. The idea is that a stored procedure would do the inserting, and the insert into the messages table does a basic check against existing messages (SQL 2008 syntax):
declare @messages table (msg xml)
insert into @messages values
('<message>You like oranges</message>')
,('<message>You like apples</message>')
declare @newMessage xml = '<message>You like apples</message>'
insert into @messages (msg)
select @newMessage
where @newMessage.value('(message)[1]', 'nvarchar(50)') not in (
select msg.value('(message)[1]', 'nvarchar(50)')
from @messages
)
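If, as the question mentions, a detected duplicate should replace the older message rather than be skipped, the same comparison could drive a delete followed by the insert (continuing with the @messages table variable above):
delete from @messages
where msg.value('(message)[1]', 'nvarchar(50)') = @newMessage.value('(message)[1]', 'nvarchar(50)')

insert into @messages (msg)
values (@newMessage)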
One solution is to stop using the XML typed field. Store the XML string into a varchar typed field.
I don't really like this solution, but I don't really like p.marino's solution either. It doesn't seem right to store a hash of something that is already in the row in the table.
What if you use OPENXML on each row in the table and query the actual XML for key nodes and/or key attributes? But then you need to do it row by row; I don't think OPENXML works against a whole set of table rows.
I have to write a component that re-creates SQL Server tables (structure and data) in an Oracle database. This component also has to take new data entered into the Oracle database and copy it back into SQL Server.
Translating the data types from SQL Server to Oracle is not a problem. However, a critical difference between Oracle and SQL Server is causing a major headache. SQL Server considers a blank string ("") to be different from a NULL value, so a char column can be defined as NOT NULL and yet still include blank strings in the data.
Oracle considers a blank string to be the same as a NULL value, so if a char column is defined as NOT NULL, you cannot insert a blank string. This is causing my component to break whenever a NOT NULL char column contains a blank string in the original SQL Server data.
So far my solution has been to not use NOT NULL in any of my mirror Oracle table definitions, but I need a more robust solution. This has to be a code solution, so the answer can't be "use so-and-so's SQL2Oracle product".
How would you solve this problem?
Edit: here is the only solution I've come up with so far, and it may help to illustrate the problem. Because Oracle doesn't allow "" in a NOT NULL column, my component could intercept any such value coming from SQL Server and replace it with "#" (just for example).
When I add a new record to my Oracle table, my code has to write "#" if I really want to insert a "", and when my code copies the new row back to SQL Server, it has to intercept the "#" and instead write "".
I'm hoping there's a more elegant way.
Edit 2: Is it possible that there's a simpler solution, like some setting in Oracle that gets it to treat blank strings the same way as the other major databases? And would that setting also be available in Oracle Lite?
I don't see an easy solution for this.
Maybe you can store your values as one or more blanks (' '), which aren't NULLs in Oracle, or keep track of this special case through extra fields/tables and an adapter layer.
My typical solution would be to add a constraint in SQL Server forcing all string values in the affected columns to have a length greater than 0:
CREATE TABLE Example (StringColumn VARCHAR(10) NOT NULL)
ALTER TABLE Example
ADD CONSTRAINT CK_Example_StringColumn CHECK (LEN(StringColumn) > 0)
However, as you have stated, you have no control over the SQL Database. As such you really have four choices (as I see it):
Treat empty string values as invalid, skip those records, alert an operator and log the records in some manner that makes it easy to manually correct / re-enter.
Convert empty string values to spaces.
Convert empty string values to a code (i.e. "LEGACY" or "EMPTY").
Rollback transfers that encounter empty string values in these columns, then put pressure on the SQL Server database owner to correct their data.
Number four would be my preference, but isn't always possible. The action you take will really depend on what the Oracle users need. Ultimately, if nothing can be done about the SQL Server database, I would explain the issue to the Oracle business system owners, explain the options and consequences, and make them make the decision :)
NOTE: I believe in this case SQL Server actually exhibits the "correct" behaviour.
Do you have to permit empty strings in the SQL Server system? If you can add a constraint to the SQL Server system that disallows empty strings, that is probably the easiest solution.
It's nasty and could have unexpected side effects... but you could just insert chr(0) rather than ''.
drop table x
drop table x succeeded.
create table x ( id number, my_varchar varchar2(10))
create table succeeded.
insert into x values (1, chr(0))
1 rows inserted
insert into x values (2, null)
1 rows inserted
select id,length(my_varchar) from x
ID LENGTH(MY_VARCHAR)
---------------------- ----------------------
1 1
2
2 rows selected
select * from x where my_varchar is not null
ID MY_VARCHAR
---------------------- ----------
1
NOT NULL is a database constraint used to stop putting invalid data into your database. This is not serving any purpose in your Oracle database and so I would not have it.
I think you should just continue to allow NULLS in any Oracle column that mirrors a SqlServer column that is known to contain empty strings.
If there is a logical difference in the SqlServer database between NULL and empty string, then you would need something extra to model this difference in Oracle.
I'd go with an additional column on the Oracle side. Have your column allow nulls, and have a second column that identifies whether the SQL Server side should get a NULL or an empty string for the row.
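A minimal sketch of that idea on the Oracle side (all names here are invented):
CREATE TABLE mirror_example (
    some_text     VARCHAR2(10),                  -- NULL whether SQL Server held NULL or ''
    was_empty_str CHAR(1) DEFAULT 'N' NOT NULL   -- 'Y' = write '' back to SQL Server, not NULL
);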
For those who think a NULL and an empty string should be considered the same: a NULL has a different meaning from an empty string. It captures the difference between 'undefined' and 'known to be blank'. As an example, a record may have been automatically created but never validated by user input, and thus receives a NULL in the expectation that when a user validates it, it will be set to empty. Practically, we may not want to trigger logic on a NULL but may want to on an empty string. This is analogous to the case of a three-state checkbox of Yes/No/Undefined.
Neither SQL Server nor Oracle has got it entirely right. A blank should not satisfy a NOT NULL constraint, and there is a need for an empty string to be treated differently from a NULL.
If you are migrating data you might have to substitute a space for an empty string. Not very elegant, but workable. This is a nasty "feature" of Oracle.
A while ago I wrote an explanation of how Oracle handles null values on my blog. Check it here: http://www.psinke.nl/blog/hello-world/ and let me know if you have any more questions.
If you have data from a source with empty values and you must convert to an Oracle database where columns are NOT NULL, there are two things you can do:
Remove the NOT NULL constraint from the Oracle column.
Check for each individual column whether it's acceptable to place a ' ', 0, or dummy date in the column in order to be able to save your data.
Well, the main point I'd consider is whether there are any cases where a field can be NULL, the same field can be an empty string, and the business logic requires distinguishing these values. So I'd use this logic:
Check in MSSQL whether the column has a NOT NULL constraint.
Check in MSSQL whether the column has a CHECK (column <> '') or similar constraint.
If both are true, make the Oracle column NOT NULL. If only one is true, make the Oracle column nullable. If neither is true, raise an INVALID DESIGN exception (or maybe ignore it, if that is acceptable for this application).
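A sketch of how those two checks might be run against the SQL Server catalog views (the table and column names are placeholders, and the CHECK-constraint test is deliberately naive - you would still inspect the returned definition):
-- Is the column declared NOT NULL?
SELECT c.is_nullable
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('dbo.SomeTable')
  AND c.name = 'SomeColumn'

-- Is there a CHECK constraint that mentions the column (e.g. SomeColumn <> '')?
SELECT cc.name, cc.definition
FROM sys.check_constraints AS cc
WHERE cc.parent_object_id = OBJECT_ID('dbo.SomeTable')
  AND cc.definition LIKE '%SomeColumn%'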
When sending data from MSSQL to Oracle, do nothing special; all data will transfer correctly. When sending data back to MSSQL, any non-null value should be sent as is. For NULLs you have to decide whether to insert a NULL or an empty string. To do this, check the table design again (or remember the previous result) and see whether the column has a NOT NULL constraint: if it has, use an empty string; if not, use NULL. Simple and clever.
Sometimes, if you are working with an unknown and unpredictable application, you cannot check for the existence of a 'not empty string' constraint because it can take various forms. If so, you can either use the simplified logic (make the Oracle columns always nullable) or test whether you can insert an empty string into the MSSQL table without an error.
Although, for the most part, I agree with most of the other responses (not going to get into an argument about any I disagree with - not the place for that :) )
I do notice that OP mentioned the following:
"Oracle considers a blank string to be the same as a NULL value, so if a char column is defined as NOT NULL, you cannot insert a blank string."
Specifically calling out CHAR, and not VARCHAR2.
Hence, talking about an "empty string" of length 0 (i.e. '') is moot.
If he's declared the CHAR as, for example, CHAR(5), then just add a space to the empty string coming in; Oracle is going to pad it anyway, and you'll end up with a five-space string.
Now, if OP meant VARCHAR2, well yeah, that's a whole other beast, and yeah, the difference between empty string and NULL becomes relevant.
SQL> drop table junk;
Table dropped.
SQL>
SQL> create table junk ( c1 char(5) not null );
Table created.
SQL>
SQL> insert into junk values ( 'hi' );
1 row created.
SQL>
SQL> insert into junk values ( ' ' );
1 row created.
SQL>
SQL> insert into junk values ( '' );
insert into junk values ( '' )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("GREGS"."JUNK"."C1")
SQL>
SQL> insert into junk values ( rpad('', 5, ' ') );
insert into junk values ( rpad('', 5, ' ') )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("GREGS"."JUNK"."C1")
SQL>
SQL> declare
2 lv_in varchar2(5) := '';
3 begin
4 insert into junk values ( rpad(lv_in||' ', 5) );
5 end;
6 /
PL/SQL procedure successfully completed.
SQL>