Varchar2 in oracle update query from SSIS project - sql-server

I need to make an update in an Oracle database using SSIS. I am using the custom database task with a query like:
UPDATE Table SET column1 = ? where KEY = ?
The key is taken from a SQL Server table and is of type nvarchar(3); the key in the Oracle database is of type varchar2(3). First it was complaining that the key is 4 characters, so I changed the query to
UPDATE Table SET column1 = ? where KEY = TRIM(CAST(? AS VARCHAR(3)))
It works for keys that are 3 characters long, but there are also 2-character keys. I've tried trimming and converting them, but I cannot make it work for the 2-character keys.
The Oracle character set for CHAR is AL32UTF8 and for NCHAR it is AL16UTF16.

I've resolved the problem by creating a derived column KEY_LEN as LEN(key) before the update step. Then I used it in the update as the third parameter:
UPDATE Table SET column1 = ? where KEY = SUBSTR(?,0,?)
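For reference, the three ? markers map by ordinal position; a commented sketch of the final statement (KEY_LEN is the derived column added in the previous step):
-- Param 0: new value for column1
-- Param 1: the nvarchar(3) key coming from the SQL Server source
-- Param 2: KEY_LEN, the actual length of the key
UPDATE Table SET column1 = ? WHERE KEY = SUBSTR(?, 0, ?)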

Related

SSIS Pass variable to Execute SQL Update [duplicate]

I have an SSIS package in which I'm taking values from a flat file and inserting them into a table.
I have added an Execute SQL Task that creates a temp table:
CREATE TABLE [tempdb].dbo.##temptable
(
date datetime,
companyname nvarchar(50),
price decimal(10,0),
PortfolioId int,
stype nvarchar(50)
)
Insert into [tempdb].dbo.##temptable (date,companyname,price,PortfolioId,stype)
SELECT date,companyname,price,PortfolioId,stype
FROM ProgressNAV
WHERE (Date = '2011-09-30') AND (PortfolioId = 5) AND (stype in ('Index'))
ORDER BY CompanyName
Now in the above query I need to pass the three values in (Date = '2011-09-30') AND (PortfolioId = 5) AND (stype in ('Index'))
as parameters, using variables. I have created variables in the package so that the query becomes dynamic.
In your Execute SQL Task, make sure SQLSourceType is set to Direct Input, then your SQL Statement is the name of the stored proc, with question marks for each parameter of the proc, like so:
Click Parameter Mapping in the left column, add each parameter from your stored proc, and map it to your SSIS variable:
Now when this task runs it will pass the SSIS variables to the stored proc.
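For example, if the load logic above were wrapped in a stored procedure (the procedure name here is hypothetical), the SQLStatement would be:
EXEC dbo.usp_LoadTempTable ?, ?, ?
Each ? is then mapped on the Parameter Mapping page to one of the package variables; for an OLE DB connection the parameter names are the ordinals 0, 1, and 2.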
The EXCEL and OLE DB connection managers use the parameter names 0 and 1.
I was using an OLE DB connection and wasted a couple of hours trying to figure out why the query was not working or taking the parameters. The above explanation helped a lot.
Thanks a lot.
Along with @PaulStock's answer: depending on your connection type, your variable names and SQLStatement/SQLStatementSource change.
https://learn.microsoft.com/en-us/sql/integration-services/control-flow/execute-sql-task
SELECT, INSERT, UPDATE, and DELETE commands frequently include WHERE clauses to specify filters that define the conditions each row in the source tables must meet to qualify for an SQL command. Parameters provide the filter values in the WHERE clauses.
You can use parameter markers to dynamically provide parameter values. The rules for which parameter markers and parameter names can be used in the SQL statement depend on the type of connection manager that the Execute SQL task uses.
The following lists examples of the SELECT command by connection manager type. The INSERT, UPDATE, and DELETE statements are similar. The examples use SELECT to return products from the Product table in AdventureWorks2012 that have a ProductID greater than and less than the values specified by two parameters.
EXCEL, ODBC, and OLE DB:
SELECT * FROM Production.Product WHERE ProductId > ? AND ProductID < ?
ADO:
SELECT * FROM Production.Product WHERE ProductId > ? AND ProductID < ?
ADO.NET:
SELECT * FROM Production.Product WHERE ProductId > @parmMinProductID AND ProductID < @parmMaxProductID
The examples would require parameters that have the following names:
The EXCEL and OLE DB connection managers use the parameter names 0 and 1. The ODBC connection manager uses 1 and 2.
The ADO connection type could use any two parameter names, such as Param1 and Param2, but the parameters must be mapped by their ordinal position in the parameter list.
The ADO.NET connection type uses the parameter names @parmMinProductID and @parmMaxProductID.
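Applied to the query from the question over an OLE DB connection, the statement would therefore use ? markers, mapped on the Parameter Mapping page to the package variables under the parameter names 0, 1, and 2 (a sketch reusing the question's table and columns):
INSERT INTO [tempdb].dbo.##temptable (date, companyname, price, PortfolioId, stype)
SELECT date, companyname, price, PortfolioId, stype
FROM ProgressNAV
WHERE (Date = ?) AND (PortfolioId = ?) AND (stype IN (?))  -- ? markers: 0 = Date, 1 = PortfolioId, 2 = stype
ORDER BY CompanyName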
A little late to the party, but this is how I did it for an insert:
DECLARE @ManagerID AS VARCHAR(25) = 'NA'
DECLARE @ManagerEmail AS VARCHAR(50) = 'NA'
DECLARE @RecordCount AS INT = 0
SET @ManagerID = ?
SET @ManagerEmail = ?
SET @RecordCount = ?
INSERT INTO...
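The truncated INSERT then just references those local variables; a minimal sketch with a hypothetical target table:
-- dbo.ManagerAudit is a made-up table name for illustration
INSERT INTO dbo.ManagerAudit (ManagerID, ManagerEmail, RecordCount)
VALUES (@ManagerID, @ManagerEmail, @RecordCount)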

ArcGIS SQL Server ArcSDE: Versioning vs Identity Column

[I am asking here instead of GIS StackExchange because this may be more of a SQL Server issue?]
I have a SQL Server ArcSDE connection in which data is batch inserted via some scripts. Currently, any time there is a new row of data, an 'OBJECTID' column, set to INT and defined as an identity column, increases by 1. So far so good, except I need to enable "versioning" on the table.
And so I follow this: http://resources.arcgis.com/en/help/main/10.1/index.html#//003n000000v3000000
but get errors because ArcGIS is complaining about the identity column, per: http://support.esri.com/cn/knowledgebase/techarticles/detail/40329 ; and when I remove the identity attribute from the column, the column value becomes NULL--not good.
So, in my scenario, how can I increase the value of OBJECTID by 1 as an auto-increment? I suppose I could just insert some GUID into the 'OBJECTID' field through the script? Also, if I follow the GUID route, I am not sure whether I will be able to add rows manually via ArcGIS Desktop on an occasional basis.
Thanks!
Update 1: Okay, so I changed the OBJECTID field to a 'uniqueidentifier' with a default GUID value, and now I am able to enable "versioning" using ArcGIS Desktop. However, ArcGIS is expecting OBJECTID to be an INT data type--so no go?
In light of my "Update 1" above, I managed to take care of this by inserting an INT value for OBJECTID during the batch insertions, per the following: How to insert an auto_increment key into SQL Server table
So, per the above link, I ended up doing:
INSERT INTO bo.TABLE (primary_key, field1, field2) VALUES ((SELECT ISNULL(MAX(id) + 1, 0) FROM bo.Table), value1, value2)
Except in my case IDENTITY is not enabled at all in the database, so, unlike the above link, I didn't have to set IDENTITY_INSERT on/off during the batch insertions; it works anyway!

Inserting arabic characters into sql server database from grails

I have a Grails application that is intended to insert Arabic characters from the interface into a SQL Server database. First I did it with a field of type varchar, but the text appears as ???????. Then I found that the field type should be nvarchar, but with that, the application simply fails to insert any row, giving the following error:
java.sql.SQLException: Cannot insert the value NULL into column 'id', table 'smsCampaigner.dbo.message'; column does not allow nulls. INSERT fails.
I know this is a constraint error saying that id cannot be null, but this was not the case before changing the field type. Now I want to know whether we can specify from Config.groovy which encoding should be used in the database? Any other kind of help will be appreciated.
Here you go:
When you create the domain class, use the String type for that column and that's it. You can insert any character and it will be accepted.
For example, TestDomain.groovy:
class TestDomain {
String column1
}
In MySQL you can set the collation of the database/table/column to utf8_unicode_ci, and if I am not wrong that is valid for saving Arabic characters. I'm not sure how to do it with SQL Server, but it should be something similar.
Change the collation of the database, tables, and columns to Arabic_CI_AS.
In SQL Server Management Studio (SSMS), execute:
ALTER DATABASE DB06 COLLATE Arabic_CI_AS
where DB06 is the name of the database.
Right-click on the table, then click Design, then select the desired column; in its properties, set the collation to Arabic_CI_AS.
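The same column change can also be scripted; a T-SQL sketch (the column name and length are assumptions, and the column's real type and nullability must be restated):
-- messagetext and NVARCHAR(200) are placeholders for the real column definition
ALTER TABLE dbo.message
ALTER COLUMN messagetext NVARCHAR(200) COLLATE Arabic_CI_AS NOT NULL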
By the way, the error:
Cannot insert the value NULL into column
is not related to Arabic; it occurs because you are trying to insert NULL into a column that does not allow NULLs.

Data migration from MySQL to HSQL

I was working on migrating data from MySQL to HSQL.
In the MySQL data file there are plenty of records where date values are set to '0000-00-00', and the HSQL database throws the error below:
"data exception: invalid datetime format / Error Code: -3407 / State:
22007"
for all such records.
I would like to know what the optimal solution for this problem could be.
Thanks in advance
HSQLDB follows the SQL Standard and allows valid dates only. A date such as '0001-01-01' would be a good candidate for the default value.
Regardless of the method used for data inserts, the '0000-00-00' strings should be corrected before insert. One way of doing this is to use a default value for the target column with DEFAULT DATE'0001-01-01' and replace the string in the INSERT statement with the keyword DEFAULT. For example:
CREATE TABLE MYTABLE ( C1 INT, C2 DATE DEFAULT DATE'0001-01-01')
INSERT INTO MYTABLE VALUES 1, DEFAULT
INSERT INTO MYTABLE VALUES 3, '2010-08-14'
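Alternatively, the zero dates can be corrected on the MySQL side before the export; a sketch against the example table (this assumes the MySQL sql_mode still permits the existing '0000-00-00' values):
-- Replace the invalid zero dates with the chosen default before exporting
UPDATE MYTABLE SET C2 = '0001-01-01' WHERE C2 = '0000-00-00';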

Oracle considers empty strings to be NULL while SQL Server does not - how is this best handled?

I have to write a component that re-creates SQL Server tables (structure and data) in an Oracle database. This component also has to take new data entered into the Oracle database and copy it back into SQL Server.
Translating the data types from SQL Server to Oracle is not a problem. However, a critical difference between Oracle and SQL Server is causing a major headache. SQL Server considers a blank string ("") to be different from a NULL value, so a char column can be defined as NOT NULL and yet still include blank strings in the data.
Oracle considers a blank string to be the same as a NULL value, so if a char column is defined as NOT NULL, you cannot insert a blank string. This is causing my component to break whenever a NOT NULL char column contains a blank string in the original SQL Server data.
So far my solution has been to not use NOT NULL in any of my mirror Oracle table definitions, but I need a more robust solution. This has to be a code solution, so the answer can't be "use so-and-so's SQL2Oracle product".
How would you solve this problem?
Edit: here is the only solution I've come up with so far, and it may help to illustrate the problem. Because Oracle doesn't allow "" in a NOT NULL column, my component could intercept any such value coming from SQL Server and replace it with "#" (just for example).
When I add a new record to my Oracle table, my code has to write "#" if I really want to insert a "", and when my code copies the new row back to SQL Server, it has to intercept the "#" and instead write "".
I'm hoping there's a more elegant way.
Edit 2: Is it possible that there's a simpler solution, like some setting in Oracle that gets it to treat blank strings the same way all the other major databases do? And would this setting also be available in Oracle Lite?
I don't see an easy solution for this.
Maybe you can store your values as one or more blanks -> ' ', which aren't NULLs in Oracle, or keep track of this special case through extra fields/tables and an adapter layer.
My typical solution would be to add a constraint in SQL Server forcing all string values in the affected columns to have a length greater than 0:
CREATE TABLE Example (StringColumn VARCHAR(10) NOT NULL)
ALTER TABLE Example
ADD CONSTRAINT CK_Example_StringColumn CHECK (LEN(StringColumn) > 0)
However, as you have stated, you have no control over the SQL Database. As such you really have four choices (as I see it):
Treat empty string values as invalid, skip those records, alert an operator and log the records in some manner that makes it easy to manually correct / re-enter.
Convert empty string values to spaces.
Convert empty string values to a code (e.g. "LEGACY" or "EMPTY"), as sketched after this list.
Rollback transfers that encounter empty string values in these columns, then put pressure on the SQL Server database owner to correct their data.
Number four would be my preference, but isn't always possible. The action you take will really depend on what the Oracle users need. Ultimately, if nothing can be done about the SQL Server database, I would explain the issue to the Oracle business system owners, explain the options and consequences, and let them make the decision :)
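As a rough illustration of option 3, the transfer could translate a sentinel code in both directions (the table, column, and the 'EMPTY' code are all illustrative):
-- SQL Server -> Oracle: substitute a sentinel for the empty string
SELECT CASE WHEN StringColumn = '' THEN 'EMPTY' ELSE StringColumn END AS StringColumn FROM dbo.Example
-- Oracle -> SQL Server: translate the sentinel back to an empty string
SELECT CASE WHEN StringColumn = 'EMPTY' THEN '' ELSE StringColumn END AS StringColumn FROM Example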
NOTE: I believe in this case SQL Server actually exhibits the "correct" behaviour.
Do you have to permit empty strings in the SQL Server system? If you can add a constraint to the SQL Server system that disallows empty strings, that is probably the easiest solution.
It's nasty and could have unexpected side effects, but you could just insert chr(0) rather than ''.
drop table x
drop table x succeeded.
create table x ( id number, my_varchar varchar2(10))
create table succeeded.
insert into x values (1, chr(0))
1 rows inserted
insert into x values (2, null)
1 rows inserted
select id,length(my_varchar) from x
ID LENGTH(MY_VARCHAR)
---------------------- ----------------------
1 1
2
2 rows selected
select * from x where my_varchar is not null
ID MY_VARCHAR
---------------------- ----------
1
NOT NULL is a database constraint used to stop invalid data being put into your database. It is not serving any purpose in your Oracle database, so I would not have it.
I think you should just continue to allow NULLS in any Oracle column that mirrors a SqlServer column that is known to contain empty strings.
If there is a logical difference in the SqlServer database between NULL and empty string, then you would need something extra to model this difference in Oracle.
I'd go with an additional column on the Oracle side. Have your column allow nulls, and have a second column that identifies whether the SQL Server side should get a null value or an empty string for the row.
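A sketch of that two-column approach on the Oracle side (the names are illustrative):
CREATE TABLE mirror_example (
  string_col VARCHAR2(10),                 -- NULL here means either NULL or ''
  was_empty  NUMBER(1) DEFAULT 0 NOT NULL  -- 1 = the SQL Server value was an empty string
)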
For those who think a NULL and an empty string should be considered the same: a NULL has a different meaning from an empty string. It captures the difference between 'undefined' and 'known to be blank'. As an example, a record may have been automatically created but never validated by user input, and thus receive a NULL, in the expectation that when a user validates it, it will be set to be empty. In practice we may not want to trigger logic on a NULL but may want to on an empty string. This is analogous to a three-state Yes/No/Undefined checkbox.
Neither SQL Server nor Oracle has got it entirely correct. A blank should not satisfy a 'not null' constraint, and there is a need for an empty string to be treated differently from a null.
If you are migrating data you might have to substitute a space for an empty string. Not very elegant, but workable. This is a nasty "feature" of Oracle.
I wrote an explanation of how Oracle handles null values on my blog a while ago. Check it here: http://www.psinke.nl/blog/hello-world/ and let me know if you have any more questions.
If you have data from a source with empty values and you must convert to an Oracle database where columns are NOT NULL, there are 2 things you can do:
remove the not null constraint from the Oracle column
Check for each individual column whether it's acceptable to place a ' ', 0, or dummy date in the column in order to be able to save your data.
Well, the main point I'd consider is the absence of cases where a field can be NULL, the same field can be an empty string, and the business logic requires distinguishing these values. So I'd use this logic:
Check whether the MSSQL column has a NOT NULL constraint.
Check whether the MSSQL column has a CHECK (column <> '') or similar constraint (see the sketch after this answer).
If both are true, make the Oracle column NOT NULL. If only one is true, make the Oracle column nullable. If neither is true, raise an INVALID DESIGN exception (or maybe ignore it, if that's acceptable for this application).
When sending data from MSSQL to Oracle, do nothing special; all data will be transferred correctly. When sending data back to MSSQL, any not-null data should be sent as is. For null strings you should decide whether they should be inserted as NULL or as an empty string. To do this, check the table design again (or remember the previous result) and see whether the column has a NOT NULL constraint. If it has, use an empty string; if it has not, use NULL. Simple and clever.
Sometimes, if you work with an unknown and unpredictable application, you cannot check for the existence of a {not empty string} constraint because it can take various forms. If so, you can either use the simplified logic (make the Oracle columns always nullable) or check whether you can insert an empty string into the MSSQL table without error.
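A sketch of how the two checks could be read from the SQL Server catalog (the table name is an example, and matching the CHECK definition by text is only a heuristic):
SELECT c.COLUMN_NAME,
       c.IS_NULLABLE,                 -- 'NO' means the column has a NOT NULL constraint
       cc.definition AS check_definition
FROM INFORMATION_SCHEMA.COLUMNS AS c
LEFT JOIN sys.check_constraints AS cc
       ON cc.parent_object_id = OBJECT_ID(QUOTENAME(c.TABLE_SCHEMA) + '.' + QUOTENAME(c.TABLE_NAME))
      AND cc.definition LIKE '%' + c.COLUMN_NAME + '%'
WHERE c.TABLE_NAME = 'Example'        -- inspect check_definition for a <> '' style condition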
Although, for the most part, I agree with the other responses (not going to get into an argument about any I disagree with; this isn't the place for that :) ),
I do notice that OP mentioned the following:
"Oracle considers a blank string to be the same as a NULL value, so if a char column is defined as NOT NULL, you cannot insert a blank string."
Specifically calling out CHAR, and not VARCHAR2.
Hence, talking about an "empty string" of length 0 (i.e. '') is moot.
If he's declared the CHAR as, for example, CHAR(5), then just add a space to the empty string coming in; Oracle is going to pad it anyway, so you'll end up with a 5-space string.
Now, if OP meant VARCHAR2, well yeah, that's a whole other beast, and yeah, the difference between empty string and NULL becomes relevant.
SQL> drop table junk;
Table dropped.
SQL>
SQL> create table junk ( c1 char(5) not null );
Table created.
SQL>
SQL> insert into junk values ( 'hi' );
1 row created.
SQL>
SQL> insert into junk values ( ' ' );
1 row created.
SQL>
SQL> insert into junk values ( '' );
insert into junk values ( '' )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("GREGS"."JUNK"."C1")
SQL>
SQL> insert into junk values ( rpad('', 5, ' ') );
insert into junk values ( rpad('', 5, ' ') )
*
ERROR at line 1:
ORA-01400: cannot insert NULL into ("GREGS"."JUNK"."C1")
SQL>
SQL> declare
2 lv_in varchar2(5) := '';
3 begin
4 insert into junk values ( rpad(lv_in||' ', 5) );
5 end;
6 /
PL/SQL procedure successfully completed.
SQL>
