How do I copy a CLOB from Oracle into SQL Server?

The CLOB is XML data that is > 8k (sometimes > 32k). Any suggestions?

Unfortunately I was unable to do this within SQL Server itself, so I wrote a C# console application to import and parse the CLOB data, then write the results out to SQL Server.
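The core of such a transfer program is streaming the CLOB in chunks rather than reading it into one fixed-size buffer, which is what trips over the 8k/32k limits. Here is a minimal sketch of that streaming core in Java (the original poster used C#, but the idea is identical): the JDBC plumbing is elided, and the `main` method demonstrates the copy on an in-memory reader/writer standing in for `ResultSet.getCharacterStream(...)` on the Oracle side and `PreparedStatement.setCharacterStream(...)` on the SQL Server side.

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.StringWriter;
import java.io.Writer;

public class ClobCopy {
    // Streams character data in fixed-size chunks, so the full CLOB
    // never has to fit into a single fixed-length buffer.
    static long copy(Reader in, Writer out) throws IOException {
        char[] buf = new char[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the Oracle CLOB; in a real transfer the Reader
        // would come from oracleResultSet.getCharacterStream("XML_COL")
        // and the Writer would feed the SQL Server insert.
        StringBuilder big = new StringBuilder("<root>");
        for (int i = 0; i < 5000; i++) {
            big.append("<row id=\"").append(i).append("\"/>");
        }
        big.append("</root>");

        StringWriter sink = new StringWriter();
        long copied = copy(new StringReader(big.toString()), sink);
        System.out.println("copied " + copied + " chars");
    }
}
```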

You may have to use the IMAGE type, as the BINARY type is limited to (IIRC) 8000 bytes in SQL Server 2000. The limits for varchar and varbinary increased in SQL Server 2005, so it depends on what your target is. For 2005: if your data is ASCII, varchar will work; if it's Unicode text, use nvarchar; otherwise use varbinary.
If you're looking for sample code, you'll have to give us more information: what language/platform you're using, and how you're accessing the two databases. Also, is this a one-time transfer, or something you need to do programmatically in production?

Related

How to store very very large data more than varchar(max) in a SQL Server column?

I am trying to save JSON data in SQL Server 2012. The size of that data exceeds the varchar(max) size, and hence SQL Server truncates the remaining text. What is the solution to store more data?
SQL Server has a FILESTREAM feature that allows you to store data that doesn't fit in a standard varchar(max) field. There is also another option (which uses FILESTREAM under the covers) called FileTables, which allows you to store a file on the file system but access it directly from T-SQL. It is rather slick, but my colleagues and I found the learning curve to be quite steep; there are lots of little quirks you have to get used to.

Migration of SQL Server DB to HSQLDB alternative for varchar(max)

I am working on requirement to migrate Microsoft SQL Server to HSQL Database.
What will be the alternative for varchar(max) from SQL Server to other data type in HSQL Database?
You can use VARCHAR with a large maximum size, for example VARCHAR(1000000). Check the maximum size of the strings in that column in the SQL Server database and use a larger value. If the strings are typically longer than 32000 characters, consider using CLOB instead.
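The "check the maximum size, then pick a type" advice above can be sketched as a small helper. This is illustrative only: the 32000-character cutoff comes from the answer above, and the headroom rule (double the longest observed value, minimum 1000) is my own assumption, not anything HSQLDB mandates.

```java
import java.util.List;

public class ColumnSizing {
    // Suggests an HSQLDB column type from observed string lengths:
    // a generous VARCHAR(n) when values stay modest, CLOB once they
    // regularly exceed ~32000 characters (the rule of thumb above).
    static String suggestType(List<String> values) {
        int max = 0;
        for (String v : values) {
            max = Math.max(max, v.length());
        }
        if (max > 32000) {
            return "CLOB";
        }
        // Leave headroom beyond the longest value seen so far.
        return "VARCHAR(" + Math.max(max * 2, 1000) + ")";
    }

    public static void main(String[] args) {
        System.out.println(suggestType(List.of("short", "x".repeat(400))));  // VARCHAR(1000)
        System.out.println(suggestType(List.of("x".repeat(40000))));         // CLOB
    }
}
```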

Why does xp_msver return unicode on some installations of SQL Server?

Looking at various installations of SQL Server, the 4th column returned by xp_msver will sometimes be nvarchar and sometimes varchar. This appears to have no bearing on the version of SQL Server, since I see some copies of SQL Server 2000 through 2012 return varchar, while others return nvarchar. It also does not seem to depend on the Windows version or bitness.
Why does this happen, and is there a way to either configure the output or know beforehand what data type will be used?
Edit: I am using Visual FoxPro to query this information, and it has a number of issues dealing with Unicode. So I need to know how to handle the data and convert it to ANSI/single-byte encoding, if it isn't already. I understand the limitations of ANSI/single-byte, but the loss of data is considered acceptable here.
sqlexec(connhandle, "exec xp_msver")
If ADO were in the picture, I would just use the data type properties inherent in RecordSets, but I am limited to FoxPro and its own cursor functionality. When pulled into FoxPro, the Character_Value column (the 4th column in question here) comes through as a MEMO data type, which is a fancy way of saying a string (of some kind, or even binary data) possibly longer than 255 characters. It is really a catch-all for long strings and any data types that FoxPro cannot handle, which is extremely unhelpful in this case.
There is a Microsoft KB article that explicitly uses xp_msver from FoxPro and states that SQL Server 7.0 and greater always returns Unicode for the stored procedure, but this is not always the case. Also, since xp_msver is a stored procedure, sp_help and sp_columns aren't of any use here.
In all honesty, I would prefer using SERVERPROPERTY(), but it is not supported in SQL Server 7.0, which is a requirement. I would prefer not to overcomplicate the code by having different queries for different versions of SQL Server. Also, using @@VERSION is not a good option, since it would require parsing the text, would be prone to bugs, and doesn't provide all the information I need.
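Since the asker accepts lossy conversion, the "handle it either way" approach is to always downgrade the value to a single-byte codepage, replacing anything unmappable. A sketch of that downgrade (shown in Java rather than FoxPro; `windows-1252` is an assumed codepage, substitute whatever ANSI codepage the FoxPro side expects):

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CodingErrorAction;

public class AnsiDowngrade {
    // Converts a possibly-Unicode value to a single-byte codepage,
    // replacing unmappable characters with '?' -- the "acceptable
    // data loss" described above. If the value was already plain
    // ANSI, it passes through unchanged, so the caller does not need
    // to know beforehand whether the column was varchar or nvarchar.
    static String toAnsi(String s) throws CharacterCodingException {
        Charset ansi = Charset.forName("windows-1252");
        CharsetEncoder enc = ansi.newEncoder()
                .onUnmappableCharacter(CodingErrorAction.REPLACE)
                .replaceWith(new byte[] {'?'});
        ByteBuffer bytes = enc.encode(CharBuffer.wrap(s));
        return new String(bytes.array(), 0, bytes.limit(), ansi);
    }

    public static void main(String[] args) throws CharacterCodingException {
        // The snowman has no windows-1252 mapping and becomes '?'.
        System.out.println(toAnsi("Caf\u00e9 \u2603"));
    }
}
```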

nvarchar & varchar with Oracle & SQLServer

I need to upload some data from an Oracle table to a SQL Server table. The data will be uploaded to SQL Server by a Java process utilising JDBC.
Is there any benefit in creating the SQL server columns using nvarchar instead of varchar?
Google suggests that nvarchar is used when Unicode characters are involved, but I am wondering whether nvarchar provides any benefit in this situation (i.e. when the source data for the SQL Server table comes from an Oracle database running in a Unix environment).
Thanks in advance
As you have found out, nvarchar stores Unicode characters, the same as nvarchar2 within Oracle. It comes down to whether your source data is Unicode, or whether you anticipate having to store Unicode values in the future (e.g. for internationalized software).
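One way to answer "is my source data actually Unicode?" is to test whether every value survives encoding into a single-byte codepage. A minimal sketch, assuming `windows-1252` as the would-be varchar codepage (pick the codepage matching your SQL Server collation): if any value fails this check, nvarchar is the safer target.

```java
import java.nio.charset.Charset;
import java.nio.charset.CharsetEncoder;

public class TypeChoice {
    // Returns true when every character in the value can be encoded in
    // the given single-byte codepage, i.e. a plain varchar column using
    // that codepage would store it losslessly; false means the data
    // needs nvarchar to avoid corruption.
    static boolean fitsInVarchar(String value) {
        CharsetEncoder enc = Charset.forName("windows-1252").newEncoder();
        return enc.canEncode(value);
    }

    public static void main(String[] args) {
        System.out.println(fitsInVarchar("plain ASCII"));      // true
        System.out.println(fitsInVarchar("\u65e5\u672c\u8a9e")); // false (Japanese text)
    }
}
```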

Does Access have any issues with unicode capable data types like nvarchar in SQL Server?

I am using Access 2003 as a front-end UI for a SQL Server 2008 database. Looking at my SQL Server database design, I am wondering whether nvarchar was the right choice over varchar. I chose nvarchar because I thought it would be useful in case any characters represented by Unicode needed to be entered. However, I didn't think about possible issues with Access 2003 using a Unicode data type. Are there any issues with Access 2003 working with Unicode data types within SQL Server (i.e. nvarchar)? Thank you.
You can go ahead and use nvarchar if that's the correct data type for the job. Access supports Unicode data, both with its own tables and with external (linked) tables and direct queries.