Table name: sample
Table structure:
ID int
NAME varchar(30)
IPADDRESS varbinary(16)
MySQL query:
load data concurrent local infile 'C:\test.txt'
into table sample
fields terminated by ','
lines terminated by '\r\n'
(ID, NAME, @var3)
set IPADDRESS = inet_pton(@var3)
SQL Server equivalent query:
??
An answer using bcp would be appreciated.
Thanks in advance.
Here is an article which you will find useful:
How do I load text or csv file data into SQL Server?
It was the second result from Google when searching for "bcp load file"
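For a first pass, a bare-bones bcp invocation for the comma-delimited file could look roughly like this (server, database and staging-table names are placeholders, and Windows authentication via -T is assumed); note that the varbinary IPADDRESS column still needs a conversion step, as described in the edit below:
bcp YourDb.dbo.sample_staging in "C:\test.txt" -S YourServer -T -c -t,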
EDIT:
You might be able to do your import in two steps. Load the rows from the file into a temp table, then apply a function to convert the IP strings to the binary format.
Have a look at this question on SO: Datatype for storing ip address in SQL Server
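For example, a minimal sketch of that two-step approach, assuming IPv4 addresses in dotted-quad form (the staging table name and the PARSENAME-based conversion are illustrative, not the only way to do it):

CREATE TABLE #staging
(
    ID       int,
    NAME     varchar(30),
    IPSTRING varchar(45)   -- raw text form of the address from the file
);

BULK INSERT #staging
FROM 'C:\test.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\r\n');

-- Convert dotted-quad IPv4 strings to 4-byte binary while copying into the
-- real table; IPv6 would need a fuller parser (e.g. a CLR function).
INSERT INTO sample (ID, NAME, IPADDRESS)
SELECT ID,
       NAME,
       CAST(CAST(PARSENAME(IPSTRING, 4) AS int) AS binary(1)) +
       CAST(CAST(PARSENAME(IPSTRING, 3) AS int) AS binary(1)) +
       CAST(CAST(PARSENAME(IPSTRING, 2) AS int) AS binary(1)) +
       CAST(CAST(PARSENAME(IPSTRING, 1) AS int) AS binary(1))
FROM #staging;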
I am migrating my data from SQL Server to Hive using the following steps, but there is a data issue with the resulting table. I tried various options, including checking the data types and using CSVSerde, but I am not able to get the data aligned properly in the respective columns. These are the steps I followed:
Export the SQL Server data to a flat file with fields separated by commas.
Create an external table in Hive as given below and load the data.
CREATE EXTERNAL TABLE IF NOT EXISTS myschema.mytable (
r_date timestamp
, v_nbr varchar(12)
, d_account int
, d_amount decimal(19,4)
, a_account varchar(14)
)
row format delimited
fields terminated by ','
stored as textfile;
LOAD DATA INPATH 'gs://mybucket/myschema.db/mytable/mytable.txt' OVERWRITE INTO TABLE myschema.mytable;
There is an issue with the data with every combination I could try.
I also tried OpenCSVSerde, but the result was worse than with the plain text file. I also tried changing the delimiter to a semicolon, but no luck.
row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
with serdeproperties ( "separatorChar" = ",") stored as textfile
location 'gs://mybucket/myschema.db/mytable/';
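As I understand it, OpenCSVSerde exposes every column as STRING, which would explain why the typed columns came out worse; presumably the table would have to be declared with string columns and then cast into the typed table, roughly like this (the _raw table name, location and quote/escape characters are guesses about the export format):

CREATE EXTERNAL TABLE IF NOT EXISTS myschema.mytable_raw (
r_date string
, v_nbr string
, d_account string
, d_amount string
, a_account string
)
row format serde 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
with serdeproperties ("separatorChar" = ",", "quoteChar" = "\"", "escapeChar" = "\\")
stored as textfile
location 'gs://mybucket/myschema.db/mytable_raw/';

INSERT OVERWRITE TABLE myschema.mytable
SELECT CAST(r_date AS timestamp)
, CAST(v_nbr AS varchar(12))
, CAST(d_account AS int)
, CAST(d_amount AS decimal(19,4))
, CAST(a_account AS varchar(14))
FROM myschema.mytable_raw;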
Can you please suggest a robust approach so that I don't have to deal with these data issues?
Note: currently I don't have the option of connecting my SQL Server table to Sqoop.
I'm using Sqoop to import data from SQL Server into Hive, and later to export that data out of Hive into another SQL Server. The Sqoop import works fine and converts the VARCHAR/NVARCHAR data types into String.
My question is: what is the best column type to define on the target table, since Hive currently holds the data as String? I originally defined most of my columns on the target table as VARCHAR(100) and it has been working, but now the export fails for some strings and I get:
SQL State: 22001, error code: 8152
"java.sql.BatchUpdateException: String or binary data would be
truncated."
Sample string that failed:
"HEALTH SITE PROVIDERS LLC"|" "|"3435673"|"UHGID0000547777"|"906225"|"\\N"|"\\N"|"\\N"
Clearly this data has far fewer than 100 characters in each column (columns delimited by |), so I'm confused as to how Hive/Sqoop is converting this string, or whether it does any conversion at all during the export.
I was thinking of defining my columns in the target table as NVARCHAR(MAX), but is this a bit extreme? I also need to index some of the columns, and NVARCHAR(MAX) isn't allowed in an index in SQL Server.
Regards,
Since your data is mostly of type VARCHAR(100), there is no need to store it as Hive's STRING. You can save both VARCHAR and NVARCHAR in Hive's VARCHAR.
Use --map-column-hive <column-name,hive-type....> in your sqoop import command.
Example:
Say col1 is VARCHAR(100) and col2 is NVARCHAR(100)
--map-column-hive col1='varchar(100)',col2='varchar(100)',....
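For context, a full import command might then look roughly like this (connection string, credentials and table names are placeholders):
sqoop import \
  --connect "jdbc:sqlserver://<host>:1433;databaseName=<db>" \
  --username <user> --password <pass> \
  --table source_table \
  --hive-import \
  --hive-table target_hive_table \
  --map-column-hive col1='varchar(100)',col2='varchar(100)'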
Now you can export it back to a SQL Server table having VARCHAR/NVARCHAR columns.
We have an Oracle table that contains archived data in the table format:
BLOB | BLOBID
Each BLOB is an XML file that contains all the business objects we need.
Every BLOB needs to be read, the XML parsed, and the result put into one SQL Server table that will hold all the data.
The table has 5 columns: R | S | T | D | BLOBID
Sample XML derived from BLOB:
<N>
<E>
<R>33</R>
<S>1</S>
<T>2012-01-25T09:48:43.213</T>
<D>6.9534619e+003</D>
</E>
<E>
<R>33</R>
<S>1</S>
<T>2012-01-25T09:48:45.227</T>
<D>1.1085871e+004</D>
</E>
<E>
<R>33</R>
<S>1</S>
<T>2012-01-25T09:48:47.227</T>
<D>1.1561764e+004</D>
</E>
</N>
There are a few million BLOBs, and we want to avoid copying all the data over as an XML column and then to a table; instead we want to go from BLOB to table in one step.
What is the best approach to doing this with SSIS/SQL Server?
The code below almost does what we are looking for, but only in Oracle Developer and only for one BLOB:
ALTER SESSION SET NLS_TIMESTAMP_FORMAT='yyyy-mm-dd HH24:MI:SS.FF';
SELECT b.BLOBID, a.R as R, a.S as S, a.T as T, cast(a.D AS float(23)) as D
FROM XMLTABLE('/N/E' PASSING
(SELECT xmltype.createxml(BLOB, NLS_CHARSET_ID('UTF16'), null)
FROM CLOUD --Oracle BLOB Cloud
WHERE BLOBID = 23321835)
COLUMNS
R int PATH 'R',
S int PATH 'S',
T TIMESTAMP(3) PATH 'T',
D VARCHAR(23) PATH 'D'
) a
Removing WHERE BLOBID = 23321835 gives the error ORA-01427: single-row subquery returns more than one row, since there are millions of BLOBs. Even so, is there a way to run this through SSIS? Adding the SQL to the OLE DB Source did not work for pulling the data from Oracle, even for one BLOB, and resulted in errors.
Using SQL Server 2012 and Oracle 10g
To summarize: how would we go from an Oracle BLOB containing XML to a SQL Server table with business objects derived from the XML, using SSIS?
I'm new to working with Oracle; any help would be greatly appreciated!
Update:
I was able to get some of my code to work in SSIS by modifying the Oracle Source in SSIS to use the SQL command code above, minus the first line:
ALTER SESSION SET NLS_TIMESTAMP_FORMAT='yyyy-mm-dd HH24:MI:SS.FF';
SSIS doesn't like this line.
Error message with the ALTER SESSION line above included:
No column information was returned by the SQL Command
Would there be another way to format the date without losing data? I'll try experimenting more, possibly using varchar(23) for the date instead of timestamp.
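One thing I plan to try is joining XMLTABLE directly against the BLOB table (which should also avoid the ORA-01427 error) and pulling T out as text, converting it with an explicit format mask so that no session NLS setting is needed. This is an untested sketch based on the query above:

SELECT c.BLOBID,
       a.R,
       a.S,
       TO_TIMESTAMP(a.T, 'YYYY-MM-DD"T"HH24:MI:SS.FF3') AS T,
       CAST(a.D AS float(23)) AS D
FROM CLOUD c,
     XMLTABLE('/N/E'
       PASSING xmltype.createxml(c.BLOB, NLS_CHARSET_ID('UTF16'), null)
       COLUMNS
         R int PATH 'R',
         S int PATH 'S',
         T VARCHAR2(30) PATH 'T',
         D VARCHAR2(23) PATH 'D'
     ) a

If varchar(23) turns out to be good enough on the SQL Server side, a.T could simply be selected as-is and converted there.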
I need to generate an SQL insert script to copy data from one SQL Server to another.
So with .NET, I'm reading the data from a given SQL Server table and writing it to a new text file, which can then be executed in order to insert this data into other databases.
One of the columns is a VARBINARY(MAX).
How can (and should) I transform the obtained byte[] into text for the script so that it can still be inserted into the other databases?
SSMS shows this data as a hex string. Is this the format to use?
I can get the same format with the following:
BitConverter.ToString(<MyByteArray>).Replace("-", "")
But how can this be inserted again?
I tried:
CONVERT(VARBINARY(MAX), '0xMyHexString')
This does an insert, but the value is not the same as in the source table.
It turned out you can just directly insert the hex string, no need to convert anything:
INSERT TableName (VarBinColumnName)
VALUES (0xMyHexString)
Just don't ask why I didn't test this directly...
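For completeness: if the hex value does have to travel as a string (for example as a parameter), CONVERT accepts a style argument, and style 1 tells it to parse '0x...' text as hex rather than taking the characters' own bytes, which is presumably why the earlier CONVERT attempt stored a different value. A small sketch with illustrative bytes:
INSERT TableName (VarBinColumnName)
VALUES (CONVERT(VARBINARY(MAX), '0x010203', 1))   -- style 1 = parse the string as a hex literal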
There are two questions on SO that may help:
What is the fastest way to get varbinary data from SQL Server into a C# Byte array?
and
How Do I Insert A Byte[] Into an SQL Server VARBINARY column?
I have a SQL Server 2000 database with a table containing an image column.
How do I insert the binary data of a file into that column by specifying the path of the file?
CREATE TABLE Files
(
FileId int,
FileData image
)
I believe this would be somewhere close:
INSERT INTO Files
(FileId, FileData)
SELECT 1, * FROM OPENROWSET(BULK N'C:\Image.jpg', SINGLE_BLOB) rs
Something to note: the above runs in SQL Server 2005 and SQL Server 2008 with the data type as varbinary(max). It was not tested with image as the data type, and the BULK option of OPENROWSET is not available in SQL Server 2000.
If you mean using a literal, you simply have to create a binary string:
insert into Files (FileId, FileData) values (1, 0x010203040506)
And you will have a record with a six byte value for the FileData field.
You indicate in the comments that you want to just specify the file name, which you can't do with SQL Server 2000 (or any other version that I am aware of).
In SQL Server 2005/2008 you would need a CLR stored procedure, or an extended stored procedure (but I'd avoid that at all costs unless you have to), which takes the filename and then inserts the data (or returns the byte string, but that can possibly be quite long).
In regard to the question of only being able to get data from a stored procedure or query, I would say the answer is yes, because if you give SQL Server the ability to read files from the file system, what do you do when you aren't connected through Windows Authentication? Which user is used to determine the rights? If you are running the service as an admin (God forbid), then you can have an elevation of rights, which shouldn't be allowed.