Handling column names with spaces in SQL Server

I have created a SQL query to update various columns where the column names contain spaces. It works if I run it manually as a query:
UPDATE dbo.Survey
SET PhotoPathQ1='(null)'
WHERE "Q1 Photo Taken"='0'
UPDATE dbo.Survey
SET PhotoPathQ2='(null)'
WHERE "Q2 Photo Taken"='0'
UPDATE dbo.Survey
SET PhotoPathQ3='(null)'
WHERE "Q3 Photo Taken"='0'
... and further similar updates
However, if I try to automate this using SQL Server Agent as a Transact-SQL (T-SQL) job step, it does not actually do anything to my table: the job reports that it ran successfully, but the data has not been updated.
Am I missing something obvious here?
Any help would be appreciated.

Looks like an error in your syntax; try this as an example:
UPDATE dbo.Survey
SET PhotoPathQ1 = null
WHERE [Q1 Photo Taken] = 0
This assumes that the field PhotoPathQ1 is nullable and that you actually want to insert a true null value into it rather than the string '(null)'.
It also assumes that [Q1 Photo Taken] is a bit or int field, although SQL Server will handle the conversion happily if you have it in quotes. If it's a string data type, then you should leave the quotes there.
You should use square brackets, rather than double quotes, around field names that contain spaces:
[Q1 Photo Taken]
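Applied to the whole job, the script would look something like this (assuming, as above, that the photo-taken columns are bit or int and that you want real NULLs):
UPDATE dbo.Survey SET PhotoPathQ1 = NULL WHERE [Q1 Photo Taken] = 0
UPDATE dbo.Survey SET PhotoPathQ2 = NULL WHERE [Q2 Photo Taken] = 0
UPDATE dbo.Survey SET PhotoPathQ3 = NULL WHERE [Q3 Photo Taken] = 0
-- ... and so on for the remaining columns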

Related

Snowflake Error: SQL compilation error: error line 3 at position 6 invalid identifier 'INTERNAL_ID'

I am running into an issue when I try to query a view. I can successfully pull the data without a WHERE clause, but when I add a WHERE clause it fails.
Good Query:
SELECT *
FROM V_WMS_STG_BI_DMLABOR
Failed Query:
SELECT *
FROM V_WMS_STG_BI_DMLABOR
WHERE Internal_ID = 5587640
I tried adding single and/or double quotes to Internal_ID and to the value, without success. With single quotes around the identifier ('Internal_ID'), the query didn't return any data; with single quotes around both the identifier and the value, the error went away but still no data was returned.
Here is sample data that should be returned (see the attached Sample Data Set screenshot), and here is the schema for the view (see the attached Schema screenshot).
Thank you for your help in advance.
Your schema shows the column as Internal_ID, which implies you need to use double quotes to tell Snowflake not to auto-uppercase the column name, thus "Internal_ID". The trick is that the casing has to be 100% correct when you put it inside double quotes.
You can double-click the column name in the UI editor to have it pasted into the SQL editor with the correct quotes/casing.
The value should not need quotes, as it's a number; single quotes are for strings.
thus:
SELECT *
FROM V_WMS_STG_BI_DMLABOR
WHERE "Internal_ID" = 5587640
should work.
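To make the quoting rules concrete, here is a sketch of the three variants (the comments reflect Snowflake's identifier rules as I understand them):
-- Unquoted identifiers are folded to uppercase, so this looks for
-- INTERNAL_ID, which does not match the mixed-case column on the view:
SELECT * FROM V_WMS_STG_BI_DMLABOR WHERE Internal_ID = 5587640;
-- Single quotes create a string literal, not an identifier, so this
-- compares two constants and can never match a row:
SELECT * FROM V_WMS_STG_BI_DMLABOR WHERE 'Internal_ID' = '5587640';
-- Double quotes preserve the identifier's exact casing:
SELECT * FROM V_WMS_STG_BI_DMLABOR WHERE "Internal_ID" = 5587640;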

Dynamic SQL Insert: Column name or number of supplied values does not match table definition

I encounter some strange behavior with a dynamic SQL Query.
In a stored procedure I construct an insert query string out of multiple strings, and I execute the insert query in the SP like this (due to the length restriction on a single nvarchar variable):
EXEC(@QuerySelectPT+@QueryFromPT+@QueryFromPT)
If I print each part of the query, put the parts together, and execute them manually in Management Studio, the query works fine and inserts the data. But if I execute the query via EXEC() in the stored procedure, I get a
Column name or number of supplied values does not match table definition.
error message.
I have checked the number and spelling of the columns in my query and in my insert table multiple times, but I have not found any differences so far.
Any advice?
Your column count for the INSERT differs from the column count of the SELECT. Print the statement before the EXEC and find the error.
It's a shot in the dark, but seeing as you say the queries are valid and the manually assembled query works, the issue could be caused by string truncation.
Could you try casting the first part to VARCHAR(MAX)? Functions such as CAST are not allowed directly inside EXEC(), so assign the result to a variable first:
DECLARE @Query VARCHAR(MAX) = CAST(@QuerySelectPT AS VARCHAR(MAX)) + @QueryFromPT + @QueryFromPT;
EXEC(@Query);
Also, as Management Studio's Messages tab and SELECT output are limited to 4,000 symbols, I think, you can test whether the whole query is assembled correctly like this:
SELECT CAST(@QuerySelectPT+@QueryFromPT+@QueryFromPT AS XML)
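To see the suspected truncation in isolation, here is a minimal self-contained sketch (the variable names and sizes are assumptions; it uses SQL Server 2008+ DECLARE-with-initializer syntax):
-- Two NVARCHAR(4000) parts, the first already at its maximum length:
DECLARE @part1 NVARCHAR(4000) = REPLICATE(N'x', 4000);
DECLARE @part2 NVARCHAR(4000) = N'yyy';
-- Concatenating non-MAX types caps the result at 4000 characters,
-- silently dropping the tail of the statement:
DECLARE @query NVARCHAR(4000) = @part1 + @part2;
SELECT LEN(@query);     -- 4000: the tail is gone
-- Casting one operand to NVARCHAR(MAX) promotes the whole expression:
DECLARE @queryMax NVARCHAR(MAX) = CAST(@part1 AS NVARCHAR(MAX)) + @part2;
SELECT LEN(@queryMax);  -- 4003: nothing lost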

SQL Server insert or query between servers ignores column in table starting with a number?

I am trying to query between two servers which have identical tables (used the same create statement for both). When I try to insert the results from Server A to Server B I get an error indicating "Column name or number of supplied values does not match table definition."
Query run on server A
Insert into ServerB.Database1.dbo.Table1
Select *
from Table1
The error is clear, but what isn't clear is the reason it is generated. The definitions of the two tables are identical. What I was finally able to isolate is that a column name that starts with a numeric value is not being recognized.
When I run this on ServerA:
Select *
from ServerB.Database1.dbo.Table1
The field whose name starts with a numeric value is not shown in the result set of the query. The short-term fix was to rename the field in the database, but why is this happening?
I am curious about the collation too, but really the answer is to wrap the object names in square brackets, i.e. SELECT [1col], [2col], [etc] FROM [1database].[2owner].[3table]. This way SQL will recognize each as an object name and not a function.
One other thing to keep in mind is not to use splat (*) in your SELECT statement; this has potential problems of its own. For example, you could run into an error on your insert if ServerA's Table1 structure was changed and ServerB's Table1 stayed the same.
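Putting both suggestions together, a hedged sketch (the column names here are hypothetical):
-- Explicit, bracketed column lists protect against both the
-- numeric-leading name and future structure drift between servers:
INSERT INTO ServerB.Database1.dbo.Table1 ([1stField], [Name], [Created])
SELECT [1stField], [Name], [Created]
FROM Table1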

Oracle considers empty strings to be NULL while SQL Server does not - how is this best handled?

I have to write a component that re-creates SQL Server tables (structure and data) in an Oracle database. This component also has to take new data entered into the Oracle database and copy it back into SQL Server.
Translating the data types from SQL Server to Oracle is not a problem. However, a critical difference between Oracle and SQL Server is causing a major headache. SQL Server considers a blank string ("") to be different from a NULL value, so a char column can be defined as NOT NULL and yet still include blank strings in the data.
Oracle considers a blank string to be the same as a NULL value, so if a char column is defined as NOT NULL, you cannot insert a blank string. This is causing my component to break whenever a NOT NULL char column contains a blank string in the original SQL Server data.
So far my solution has been to not use NOT NULL in any of my mirror Oracle table definitions, but I need a more robust solution. This has to be a code solution, so the answer can't be "use so-and-so's SQL2Oracle product".
How would you solve this problem?
Edit: here is the only solution I've come up with so far, and it may help to illustrate the problem. Because Oracle doesn't allow "" in a NOT NULL column, my component could intercept any such value coming from SQL Server and replace it with "#" (just for example).
When I add a new record to my Oracle table, my code has to write "#" if I really want to insert a "", and when my code copies the new row back to SQL Server, it has to intercept the "#" and instead write "".
I'm hoping there's a more elegant way.
Edit 2: Is it possible that there's a simpler solution, like some setting in Oracle that gets it to treat blank strings the same way as all the other major databases? And would this setting also be available in Oracle Lite?
I don't see an easy solution for this.
Maybe you can store your values as one or more blanks (' '), which aren't NULLs in Oracle, or keep track of this special case through extra fields/tables and an adapter layer.
My typical solution would be to add a constraint in SQL Server forcing all string values in the affected columns to have a length greater than 0:
CREATE TABLE Example (StringColumn VARCHAR(10) NOT NULL)
ALTER TABLE Example
ADD CONSTRAINT CK_Example_StringColumn CHECK (LEN(StringColumn) > 0)
However, as you have stated, you have no control over the SQL Database. As such you really have four choices (as I see it):
Treat empty string values as invalid, skip those records, alert an operator and log the records in some manner that makes it easy to manually correct / re-enter.
Convert empty string values to spaces.
Convert empty string values to a code (e.g. "LEGACY" or "EMPTY").
Rollback transfers that encounter empty string values in these columns, then put pressure on the SQL Server database owner to correct their data.
Number four would be my preference, but isn't always possible. The action you take will really depend on what the Oracle users need. Ultimately, if nothing can be done about the SQL database, I would explain the issue to the Oracle business system owners, explain the options and consequences, and make them make the decision :)
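For what it's worth, options two and three could be applied in the extract query itself; a hedged sketch against the Example table above:
-- Option 2 ships a single space, option 3 ships a marker code,
-- whenever the source value is an empty string:
SELECT CASE WHEN StringColumn = '' THEN ' '     ELSE StringColumn END AS StringColumnOption2,
       CASE WHEN StringColumn = '' THEN 'EMPTY' ELSE StringColumn END AS StringColumnOption3
FROM Example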
NOTE: I believe in this case SQL Server actually exhibits the "correct" behaviour.
Do you have to permit empty strings in the SQL Server system? If you can add a constraint to the SQL Server system that disallows empty strings, that is probably the easiest solution.
It's nasty and could have unexpected side effects, but you could just insert chr(0) rather than ''.
drop table x
drop table x succeeded.
create table x ( id number, my_varchar varchar2(10))
create table succeeded.
insert into x values (1, chr(0))
1 rows inserted
insert into x values (2, null)
1 rows inserted
select id,length(my_varchar) from x
ID LENGTH(MY_VARCHAR)
---------------------- ----------------------
1 1
2
2 rows selected
select * from x where my_varchar is not null
ID MY_VARCHAR
---------------------- ----------
1
NOT NULL is a database constraint used to stop invalid data being put into your database. It is not serving any purpose in your Oracle database, so I would not have it.
I think you should just continue to allow NULLS in any Oracle column that mirrors a SqlServer column that is known to contain empty strings.
If there is a logical difference in the SqlServer database between NULL and empty string, then you would need something extra to model this difference in Oracle.
I'd go with an additional column on the Oracle side. Have your column allow NULLs, and have a second column that identifies whether the SQL Server side should get a NULL value or an empty string for the row.
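A hedged sketch of that two-column approach on the Oracle side (table and column names are hypothetical):
-- The flag records whether the SQL Server value was '' rather than NULL.
CREATE TABLE survey_mirror (
  id            NUMBER PRIMARY KEY,
  name          VARCHAR2(100),                -- nullable on the Oracle side
  name_is_empty NUMBER(1) DEFAULT 0 NOT NULL
);
-- When copying back to SQL Server: if name_is_empty = 1, write '';
-- otherwise write name as-is (which may be a true NULL).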
For those who think a NULL and an empty string should be considered the same: a NULL has a different meaning from an empty string. It captures the difference between 'undefined' and 'known to be blank'. As an example, a record may have been created automatically but never validated by user input, and thus receives a NULL in the expectation that when a user validates it, it will be set to empty. Practically, we may not want to trigger logic on a NULL but may want to on an empty string. This is analogous to the case for a three-state checkbox of Yes/No/Undefined.
Neither SQL Server nor Oracle has got it entirely correct. A blank should not satisfy a NOT NULL constraint, and there is a need for an empty string to be treated differently from a NULL.
If you are migrating data you might have to substitute a space for an empty string. Not very elegant, but workable. This is a nasty "feature" of Oracle.
I wrote an explanation of how Oracle handles null values on my blog a while ago. Check it here: http://www.psinke.nl/blog/hello-world/ and let me know if you have any more questions.
If you have data from a source with empty values and you must convert to an Oracle database where columns are NOT NULL, there are 2 things you can do:
remove the NOT NULL constraint from the Oracle column
check, for each individual column, whether it's acceptable to place a ' ', 0, or dummy date in the column in order to be able to save your data.
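For the first option, a hedged sketch in Oracle syntax (table and column names are hypothetical):
-- Drop the NOT NULL constraint so empty SQL Server strings can arrive as NULL:
ALTER TABLE survey_mirror MODIFY (name NULL);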
Well, the main point I'd consider is whether there are any cases where a field can be NULL, the same field can be an empty string, and the business logic requires distinguishing these values. Assuming there are none, I'd use this logic:
check in MSSQL whether the column has a NOT NULL constraint
check in MSSQL whether the column has a CHECK(column <> '') or similar constraint
If both are true, make the Oracle column NOT NULL. If only one is true, make the Oracle column NULL. If neither is true, raise an INVALID DESIGN exception (or maybe ignore it, if that's acceptable for the application).
When sending data from MSSQL to Oracle, just do nothing special; all data will be transferred correctly. When retrieving data back to MSSQL, any non-null data should be sent as is. For null strings you should decide whether to insert a NULL or an empty string. To do this, check the table design again (or remember the previous result) and see whether the column has a NOT NULL constraint. If it has one, use an empty string; if it has not, use NULL. Simple and clever.
Sometimes, if you work with an unknown and unpredictable application, you cannot check for the existence of a {not empty string} constraint because of the various forms it can take. If so, you can either use simplified logic (make the Oracle columns always nullable) or check whether you can insert an empty string into the MSSQL table without error.
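A hedged sketch of the two MSSQL checks using the catalog views (the table name is hypothetical; you would still have to inspect the returned definition text to confirm it really is a {not empty string} check):
-- For each column of dbo.Example: is it NOT NULL, and does a
-- column-level CHECK constraint exist on it (e.g. CHECK(col <> ''))?
SELECT c.name,
       c.is_nullable,
       cc.definition AS check_definition
FROM sys.columns c
LEFT JOIN sys.check_constraints cc
       ON cc.parent_object_id = c.object_id
      AND cc.parent_column_id = c.column_id
WHERE c.object_id = OBJECT_ID('dbo.Example')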
Although, for the most part, I agree with most of the other responses (not going to get into an argument about any I disagree with - not the place for that :) )
I do notice that OP mentioned the following:
"Oracle considers a blank string to be the same as a NULL value, so if a char column is defined as NOT NULL, you cannot insert a blank string."
Specifically calling out CHAR, and not VARCHAR2.
Hence, talking about an "empty string" of length 0 (i.e. '') is moot.
If he's declared the CHAR as, for example, CHAR(5), then just add a space to the empty string coming in; Oracle is going to pad it anyway. You'll end up with a five-space string.
Now, if OP meant VARCHAR2, well yeah, that's a whole other beast, and yeah, the difference between empty string and NULL becomes relevant.
SQL> drop table junk;
Table dropped.
SQL>
SQL> create table junk ( c1 char(5) not null );
Table created.
SQL>
SQL> insert into junk values ( 'hi' );
1 row created.
SQL>
SQL> insert into junk values ( ' ' );
1 row created.
SQL>
SQL> insert into junk values ( '' );
insert into junk values ( '' )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("GREGS"."JUNK"."C1")
SQL>
SQL> insert into junk values ( rpad('', 5, ' ') );
insert into junk values ( rpad('', 5, ' ') )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("GREGS"."JUNK"."C1")
SQL>
SQL> declare
2 lv_in varchar2(5) := '';
3 begin
4 insert into junk values ( rpad(lv_in||' ', 5) );
5 end;
6 /
PL/SQL procedure successfully completed.
SQL>

How to convert chinese characters to AL16UTF16 or WE8ISO8859P1?

I have inserted into database some chinese characters. (Column name is NAME, data type is VARCHAR2)
My project name is 中文版测试 and I need to select the project by this name.
But.
In the Oracle database, 中文版测试 is stored under the name ÖÐÎÄ°æ²âÊÔ (if I understand right, my database character set is WE8ISO8859P1).
I want to convert these characters from the database (ÖÐÎÄ°æ²âÊÔ) to Chinese characters (中文版测试), or to the same values so I can compare them.
I tried this:
select DIRNAME from MILLENNIUM.PROJECTINFO where UPPER(convert(NAME, 'AL32UTF8', 'we8iso8859p1')) = UPPER(convert('中文版测试', 'WE8MSWIN1252', 'AL32UTF8'));
I need to compare values from Oracle with the name of the project.
Oracle settings:
NLS_CHARACTERSET WE8ISO8859P1 0
NLS_NCHAR_CHARACTERSET AL16UTF16 0
As Michael O'Neill already pointed out, it is not possible to store Chinese characters in character set WE8ISO8859P1. All unsupported characters are automatically replaced by ¿ (or some other placeholder).
BTW, WE8ISO8859P1 is different from WE8MSWIN1252 (see What is the exact difference between Windows-1252(1/3/4) and ISO-8859-1?), so your conversion does not work anyway.
The solution is to change the data type of column NAME to NVARCHAR2, or to migrate your database to UTF-8; see Character Set Migration and the Database Migration Assistant for Unicode Guide. In any case you should consider your data lost, or rather corrupted.
However, if your client application was configured wrongly, then in certain circumstances it is possible to insert unsupported characters; see If we have US7ASCII characterset why does it let us store non-ascii characters?
In that case you can try to repair your data like this:
ALTER TABLE PROJECTINFO ADD NAME_CN NVARCHAR2(100);
UPDATE PROJECTINFO SET NAME_CN = UTL_I18N.RAW_TO_NCHAR(UTL_I18N.STRING_TO_RAW(NAME), 'ZHS16CGB231280');
ALTER TABLE PROJECTINFO DROP COLUMN NAME;
ALTER TABLE PROJECTINFO RENAME COLUMN NAME_CN TO NAME;
select DIRNAME from MILLENNIUM.PROJECTINFO where NAME = '中文版测试';
but it may not work for all of your data.
Hence a (not recommended) workaround for your problem could be
select DIRNAME
from MILLENNIUM.PROJECTINFO
where UTL_I18N.RAW_TO_NCHAR(UTL_I18N.STRING_TO_RAW(NAME), 'ZHS16CGB231280') = '中文版测试';
You cannot take Chinese characters, insert them into a column that is bound by the WE8ISO8859P1 character set, and then ever select them again as Chinese characters. You lost information on your insert, and that lost information cannot be reconstituted.
In your case, if the NAME column were defined as NVARCHAR2, you could do an AL16UTF16-to-AL16UTF16 comparison in a subsequent SELECT. Or, even better, you would not need to convert and compare with AL16UTF16 at all if your client tool is up to the task.
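For illustration, a hedged sketch of that comparison, assuming NAME had been converted to NVARCHAR2 (as in the repair script above) and the client configuration passes the literal through intact:
SELECT DIRNAME
FROM MILLENNIUM.PROJECTINFO
WHERE NAME = N'中文版测试';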
