I'm working on a legacy system using SQL Server in 2000 compatibility mode. There's a stored procedure that selects from a query into a virtual table.
When I run the query, I get the following error:
Error converting data type varchar to numeric
which initially tells me that something stringy is trying to make its way into a numeric column.
To debug, I created the virtual table as a physical table and started eliminating each column.
The culprit is a column called accnum, which stores a bank account number and has a source data type of varchar(21). I'm trying to insert it into a numeric(16,0) column, which obviously could cause issues.
So I made the accnum column varchar(21) as well in the physical table I created and it imports 100%. I also added an additional column called accnum2 and made it numeric(16,0).
After the data is imported, I proceeded to update accnum2 to the value of accnum. Lo and behold, it updates without an error, yet it wouldn't work with an insert into...select query.
I have to work with the data types provided. Any ideas how I can get around this?
Try using a conversion in your INSERT statement, like this:
SELECT [accnum] = CASE ISNUMERIC(accnum)
                      WHEN 0 THEN NULL
                      ELSE CAST(accnum AS NUMERIC(16, 0))
                  END
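One caveat: ISNUMERIC returns 1 for values such as '$', '.', or '1e4' that can still fail a cast to NUMERIC, and TRY_CONVERT isn't available in 2000 compatibility mode. If you run into that, a stricter digits-only test is one alternative (a sketch, not tested against your data):

SELECT [accnum] = CASE
                      WHEN accnum NOT LIKE '%[^0-9]%' AND LEN(accnum) BETWEEN 1 AND 16
                          THEN CAST(accnum AS NUMERIC(16, 0))
                      ELSE NULL   -- anything non-numeric, empty, or too long becomes NULL
                  END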
We have a SQL Server 2008 database with a table containing more than 1.4 billion records. Due to adjustments of the coordinate system, we have to expand the datatype of the coordinate column from decimal(18, 2) to decimal(18, 3).
We've tried multiple things, but everything resulted in an exception (transaction log is full) after about 14 hours of execution.
These are the things we tried:
Alter Table
ALTER TABLE Adress
ALTER COLUMN Coordinate decimal(18, 3) NULL
Designer
Uncheck Tools > Options > Designer > Prevent saving changes that require table re-creation
Open Designer
Change datatype of column to decimal(18, 3)
Right-click > Generate Change Script...
This script creates a new table with the new data type, copies the old data into it, drops the old table, and renames the new one.
Unfortunately, both attempts resulted in a "transaction log full" exception after 14 hours of execution.
I thought that changing the data type via ALTER TABLE ... ALTER COLUMN only changes the metadata and should finish in a matter of (milli)seconds?
Do you know of any other method I could try?
Why do my attempts (especially #1) need that much time?
Thanks in advance
The main issue is the sheer amount of data in the table. Both of your attempts are fine in principle, but with this much data they will inevitably take a long time.
Each time you alter a column's data type, SQL Server has to convert the existing data to the target type. Changing the scale of a decimal column changes how each value is stored, so it is a size-of-data operation rather than a metadata-only change, and running that conversion over 1.4 billion rows is what takes so long.
Also, check whether you have any triggers on the table.
Finally, I would suggest the following steps. Give them a try at least:
Remove any primary keys/indexes/constraints pointing to the old column, and disable any triggers (if there are any).
Introduce a new nullable column with the new data type to the table (even if it is meant to be NOT NULL).
Now run an update that sets the new column to the old column's value. Do the updating in chunks, in batches of 1,000 to 100,000 records, and add a WHERE condition so each batch only touches rows that still need updating.
Once the whole table is updated, change the new column from NULL to NOT NULL in the designer (if it is meant to be NOT NULL).
Drop the old column, then run a SELECT to verify your changes.
One last point: your database transaction log is also full. It can be shrunk, but only with some precautions, so look into how to safely reset the transaction log as well.
Hope This Helps. :)
The solution is to do the updates in batches, easing the pressure on the log file.
Method 1:
a) Create a new table with the new definition.
b) Copy the data to the new table in batches.
c) Drop the old table.
d) Rename the new table.
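A rough sketch of step (b), assuming the table has an integer key named Id (a hypothetical name, as are Adress_New and the batch size):

DECLARE @fromId int, @maxId int
SELECT @fromId = 0, @maxId = MAX(Id) FROM Adress

WHILE @fromId < @maxId
BEGIN
    -- copy one key range per iteration so each transaction stays small
    INSERT INTO Adress_New (Id, Coordinate)   -- plus the remaining columns
    SELECT Id, Coordinate
    FROM Adress
    WHERE Id > @fromId AND Id <= @fromId + 100000

    SET @fromId = @fromId + 100000
END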
Method 2:
a) Create a new column with the correct definition.
b) Update the new column with data from the old column in batches.
c) Drop the old column.
d) Rename the new column.
Method 3:
a) BCP the data into a file.
b) Truncate the table.
c) Alter the column.
d) Set the recovery model to bulk logged or simple.
e) BCP the data from the file into the table.
f) Set the recovery model back to full.
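Method 3 might look roughly like this; the database, server, and path names are hypothetical, -n keeps bcp in native format, and -b sets the batch size.

From a command prompt:

bcp MyDb.dbo.Adress out C:\temp\adress.dat -n -T -S MYSERVER

Then in T-SQL:

TRUNCATE TABLE dbo.Adress
ALTER TABLE dbo.Adress ALTER COLUMN Coordinate decimal(18, 3) NULL
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED

Back at the command prompt:

bcp MyDb.dbo.Adress in C:\temp\adress.dat -n -T -S MYSERVER -b 100000

And finally:

ALTER DATABASE MyDb SET RECOVERY FULL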
Add the new column as the last column of the table; if you try to insert it before the last column, the designer has to rebuild the whole table, which could take a long time. Then copy the values over in batches. Note the WHERE clause skips rows that are already copied, and skips rows whose Coordinate is NULL so they aren't re-updated forever:

ALTER TABLE Adress ADD NewCoordinate decimal(18, 3) NULL

SELECT 1   -- primes @@ROWCOUNT so the loop body runs at least once
WHILE (@@ROWCOUNT > 0)
BEGIN
    UPDATE TOP (10000) Adress
    SET NewCoordinate = Coordinate
    WHERE NewCoordinate IS NULL AND Coordinate IS NOT NULL
END
Here is my suggestion:
Add a field to your table like below:

ALTER TABLE Adress ADD NewCoordinate DECIMAL(18, 3) NULL

WHILE (1 = 1)
BEGIN
    -- Coordinate IS NOT NULL prevents an endless loop on rows whose
    -- Coordinate is NULL (they would stay NULL after the update)
    UPDATE TOP (1000) Adress
    SET NewCoordinate = Coordinate
    WHERE NewCoordinate IS NULL AND Coordinate IS NOT NULL

    IF (@@ROWCOUNT < 1000)
        BREAK
END

Try to keep your transactions small.
And finally, drop your Coordinate field.
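Whichever batching approach you use, the log-full error will return unless log space can be reused between batches. In FULL recovery that means taking periodic log backups while the loop runs (database name and path below are hypothetical); in SIMPLE recovery, issuing a CHECKPOINT between batches achieves the same effect:

BACKUP LOG MyDb TO DISK = 'C:\temp\MyDb.trn'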
I'd like some advice about handling truncation issues in SSIS. I have a column Col1 which is MONEY in a table. I'd like to output that to a text file (fixed width, ragged right). In the output file, the column which holds Col1 must only be 8 characters wide.
In the OLEDB Data Source, Col1 is specified as:
currency [DT_CY] in both the External Columns and Output Columns tabs.
In the Flat File Connection Manager's Advanced tab, Col1 is specified as:
currency [DT_CY], with InputColumnWidth set to 8.
If I populate Col1 with 123456789.00 and execute the task, the OLEDB source succeeds and passes rows to the destination, but the task fails with :
Error: 0xC02020A1 at DFT_Test, FFDEST_Test [3955]: Data conversion
failed. The data conversion for column "Col1" returned status value 4
and status text "Text was truncated or one or more characters had no
match in the target code page.". Error: 0xC02020A0 at DFT_Test,
FFDEST_Test [3955]: Cannot copy or convert flat file data for column
"Col1".
I want to avoid these truncation errors. In the Error Output of the source, I change the Truncation property for Col1 from Fail Component to Ignore Failure. I would have expected that would resolve the issue, but executing the task still gives the same error.
Can someone give some guidance on how to make SSIS simply truncate the column to 8 characters?
Use a Derived Column task to create a column that is an 8-character string and populate it from the money column. Then in the Destination component, map the Derived Column to the Col1 Destination instead of the original column.
Or, even better, in your source component, use a SQL query that converts your money column to a varchar(8) or char(8) column.
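For the source-query route, something along these lines (table name hypothetical): converting to a wide varchar first and then taking LEFT avoids an arithmetic overflow from a too-narrow convert, and silently truncates the value to 8 characters exactly as requested.

SELECT LEFT(CONVERT(varchar(30), Col1), 8) AS Col1
FROM dbo.MyTable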
My problem is this: I need to update some values in my products table. The column has to store HTML text, so I altered the column from varchar to the text type. Then I tried to update the value with the HTML content.
However, this throws an error:
String or binary data would be truncated
What could be the problem? Since the type is now text, the column length shouldn't be the issue anymore.
Thanks in advance.
I have to write a component that re-creates SQL Server tables (structure and data) in an Oracle database. This component also has to take new data entered into the Oracle database and copy it back into SQL Server.
Translating the data types from SQL Server to Oracle is not a problem. However, a critical difference between Oracle and SQL Server is causing a major headache. SQL Server considers a blank string ("") to be different from a NULL value, so a char column can be defined as NOT NULL and yet still include blank strings in the data.
Oracle considers a blank string to be the same as a NULL value, so if a char column is defined as NOT NULL, you cannot insert a blank string. This is causing my component to break whenever a NOT NULL char column contains a blank string in the original SQL Server data.
So far my solution has been to not use NOT NULL in any of my mirror Oracle table definitions, but I need a more robust solution. This has to be a code solution, so the answer can't be "use so-and-so's SQL2Oracle product".
How would you solve this problem?
Edit: here is the only solution I've come up with so far, and it may help to illustrate the problem. Because Oracle doesn't allow "" in a NOT NULL column, my component could intercept any such value coming from SQL Server and replace it with "#" (just for example).
When I add a new record to my Oracle table, my code has to write "#" if I really want to insert a "", and when my code copies the new row back to SQL Server, it has to intercept the "#" and instead write "".
I'm hoping there's a more elegant way.
Edit 2: Is it possible that there's a simpler solution, like some setting in Oracle that makes it treat blank strings the same way all the other major databases do? And would this setting also be available in Oracle Lite?
I don't see an easy solution for this.
Maybe you can store your values as one or more blanks (' '), which aren't NULLs in Oracle, or keep track of this special case through extra fields/tables and an adapter layer.
My typical solution would be to add a constraint in SQL Server forcing all string values in the affected columns to have a length greater than 0:
CREATE TABLE Example (StringColumn VARCHAR(10) NOT NULL)
ALTER TABLE Example
ADD CONSTRAINT CK_Example_StringColumn CHECK (LEN(StringColumn) > 0)
However, as you have stated, you have no control over the SQL Database. As such you really have four choices (as I see it):
Treat empty string values as invalid, skip those records, alert an operator and log the records in some manner that makes it easy to manually correct / re-enter.
Convert empty string values to spaces.
Convert empty string values to a code (e.g. "LEGACY" or "EMPTY").
Rollback transfers that encounter empty string values in these columns, then put pressure on the SQL Server database owner to correct their data.
Number four would be my preference, but it isn't always possible. The action you take will really depend on what the Oracle users need. Ultimately, if nothing can be done about the SQL database, I would explain the issue to the Oracle business system owners, lay out the options and consequences, and let them make the decision :)
NOTE: I believe in this case SQL Server actually exhibits the "correct" behaviour.
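For options 2 or 3, the mapping can live in the extract query on the SQL Server side; a minimal sketch against the Example table above (the reverse mapping would be applied when copying rows back):

SELECT CASE WHEN StringColumn = '' THEN ' ' ELSE StringColumn END AS StringColumn
FROM Example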
Do you have to permit empty strings in the SQL Server system? If you can add a constraint to the SQL Server system that disallows empty strings, that is probably the easiest solution.
It's nasty and could have unexpected side effects, but you could just insert chr(0) rather than ''.
drop table x
drop table x succeeded.

create table x ( id number, my_varchar varchar2(10))
create table succeeded.

insert into x values (1, chr(0))
1 rows inserted

insert into x values (2, null)
1 rows inserted

select id, length(my_varchar) from x

ID                     LENGTH(MY_VARCHAR)
---------------------- ----------------------
1                      1
2

2 rows selected

select * from x where my_varchar is not null

ID                     MY_VARCHAR
---------------------- ----------
1
NOT NULL is a database constraint used to prevent invalid data from being put into your database. It is not serving any purpose in your Oracle database here, so I would not use it.
I think you should just continue to allow NULLS in any Oracle column that mirrors a SqlServer column that is known to contain empty strings.
If there is a logical difference in the SqlServer database between NULL and empty string, then you would need something extra to model this difference in Oracle.
I'd go with an additional column on the oracle side. Have your column allow nulls and have a second column that identifies whether the SQL-Server side should get a null-value or empty-string for the row.
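A minimal sketch of that shape, with hypothetical names (the flag records what the SQL Server side should get back):

CREATE TABLE mirror_example (
    val       VARCHAR2(10),                 -- NULL here means either NULL or empty string
    val_empty CHAR(1) DEFAULT 'N' NOT NULL  -- 'Y' when the SQL Server value was ''
)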
For those who think a NULL and an empty string should be considered the same: a NULL has a different meaning from an empty string. It captures the difference between 'undefined' and 'known to be blank'. For example, a record may have been created automatically but never validated by user input, and thus receives a NULL in the expectation that when a user validates it, it will be set to empty. Practically, we may not want to trigger logic on a NULL but may want to on an empty string. This is analogous to a three-state Yes/No/Undefined checkbox.
Neither SQL Server nor Oracle has got it entirely correct. A blank should not satisfy a NOT NULL constraint, and there is a need for an empty string to be treated differently than a NULL.
If you are migrating data you might have to substitute a space for an empty string. Not very elegant, but workable. This is a nasty "feature" of Oracle.
I wrote an explanation of how Oracle handles null values on my blog a while ago. Check it here: http://www.psinke.nl/blog/hello-world/ and let me know if you have any more questions.
If you have data from a source with empty values and you must convert to an Oracle database where columns are NOT NULL, there are 2 things you can do:
remove the not null constraint from the Oracle column
Check, for each individual column, whether it's acceptable to place a ' ', a 0, or a dummy date in the column in order to be able to save your data.
Well, the main point I'd consider is whether there is any case where a field can be NULL, the same field can be an empty string, and the business logic requires distinguishing these values. Assuming there is none, I'd use this logic:
check in MSSQL whether the column has a NOT NULL constraint
check in MSSQL whether the column has a CHECK (column <> '') or similar constraint
If both are true, make the Oracle column NOT NULL. If only one is true, make the Oracle column nullable. If neither is true, raise an INVALID DESIGN exception (or maybe ignore it, if that's acceptable for this application).
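A sketch of how those two checks might look against SQL Server's catalog (the table name is hypothetical, and since CHECK clauses can be written many ways, the filter below only catches the simplest form):

SELECT c.COLUMN_NAME, c.IS_NULLABLE, cc.CHECK_CLAUSE
FROM INFORMATION_SCHEMA.COLUMNS c
LEFT JOIN INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE u
    ON u.TABLE_NAME = c.TABLE_NAME AND u.COLUMN_NAME = c.COLUMN_NAME
LEFT JOIN INFORMATION_SCHEMA.CHECK_CONSTRAINTS cc
    ON cc.CONSTRAINT_NAME = u.CONSTRAINT_NAME
   AND cc.CHECK_CLAUSE LIKE '%<>''''%'   -- crude match for CHECK (column <> '')
WHERE c.TABLE_NAME = 'MyTable'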
When sending data from MSSQL to Oracle, just do nothing special; all data will transfer correctly. When retrieving data back to MSSQL, any non-null data should be sent as is. For null strings you have to decide whether to insert NULL or an empty string: check the table design again (or remember the previous result) and see whether the column has a NOT NULL constraint. If it does, use an empty string; if it doesn't, use NULL. Simple and clever.
Sometimes, if you work with an unknown and unpredictable application, you cannot check for the existence of a {not empty string} constraint because it can take so many forms. If so, you can either use simplified logic (make Oracle columns always nullable) or test whether you can insert an empty string into the MSSQL table without an error.
Although, for the most part, I agree with most of the other responses (I'm not going to get into an argument about any I disagree with; this isn't the place for that :) ), I do notice that the OP mentioned the following:
"Oracle considers a blank string to be the same as a NULL value, so if a char column is defined as NOT NULL, you cannot insert a blank string."
Specifically calling out CHAR, and not VARCHAR2.
Hence, talking about an "empty string" of length 0 (i.e. '') is moot.
If he's declared the CHAR as, for example, CHAR(5), then just add a space to the empty string coming in; Oracle is going to pad it anyway, so you'll end up with a five-space string.
Now, if OP meant VARCHAR2, well yeah, that's a whole other beast, and yeah, the difference between empty string and NULL becomes relevant.
SQL> drop table junk;
Table dropped.
SQL>
SQL> create table junk ( c1 char(5) not null );
Table created.
SQL>
SQL> insert into junk values ( 'hi' );
1 row created.
SQL>
SQL> insert into junk values ( ' ' );
1 row created.
SQL>
SQL> insert into junk values ( '' );
insert into junk values ( '' )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("GREGS"."JUNK"."C1")
SQL>
SQL> insert into junk values ( rpad('', 5, ' ') );
insert into junk values ( rpad('', 5, ' ') )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("GREGS"."JUNK"."C1")
SQL>
SQL> declare
2 lv_in varchar2(5) := '';
3 begin
4 insert into junk values ( rpad(lv_in||' ', 5) );
5 end;
6 /
PL/SQL procedure successfully completed.
SQL>