Can't import null values from a TSV file in SQL Server 2008 - sql-server

I am importing data from a TSV file into SQL Server 2008.
When I check the table after the import, NULL values in an integer column have been replaced by 0.
How can I import them as NULL? Please help!

Using bcp, add the -k switch.
Using BULK INSERT, specify KEEPNULLS.
After comment:
Using the SSIS "Bulk Insert" task: on the Options page, set "Keep nulls" = True.
This is what the Import Wizard uses under the covers, but you'll have to save the generated package and edit it first, because I see no such option in my SSMS 2005 wizard. A sketch of the BULK INSERT route follows.
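For example, a minimal BULK INSERT sketch with KEEPNULLS (the table name, file path, and terminators are placeholders, not taken from the question):
BULK INSERT dbo.TargetTable
FROM 'C:\Data\import_file.tsv'
WITH
(
    FIELDTERMINATOR = '\t',   -- tab-separated file
    ROWTERMINATOR = '\n',
    KEEPNULLS                 -- empty fields load as NULL instead of the column default
);
The equivalent bcp call would be something like bcp YourDatabase.dbo.TargetTable in "C:\Data\import_file.tsv" -c -k -S YourServer -T, where -k is the switch that keeps the nulls.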

This can also be set in the OLE DB Destination editor in SSIS; there is a 'Keep nulls' option.

Alternative for those using the Import and Export Wizard on SQL Server Express, or anyone who finds themselves too lazy to modify the SSIS package:
Before you run the wizard, use a text editor to replace the NULLs with a valid placeholder value that you know doesn't appear in your dataset (e.g. 987654; be sure to do a search first!), then run the Import and Export Wizard normally. If your data already uses every possible value of the column's datatype (bits or tinyints, perhaps), you'll have some data massaging ahead of you, but it's still possible by staging into a temporary table whose datatypes can hold a wider range of values. Once the data is in SQL Server, use commands like
UPDATE TempTable
SET Column1 = NULL
WHERE Column1 = 987654
to get those NULLs where they belong. If you've used a temporary table, use INSERT INTO or MERGE to get your data into your end table.
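A minimal sketch of that last step, using NULLIF to turn the placeholder back into NULL on the way in (FinalTable is a hypothetical destination table; TempTable, Column1, and 987654 are from the example above):
INSERT INTO FinalTable (Column1)
SELECT NULLIF(Column1, 987654)   -- 987654 becomes NULL, everything else passes through
FROM TempTable;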

Related

After Insert Trigger never fires using Import Wizard

Using Import Wizard I tried importing data into tables tcc_Block and PROJECT_IDENTITY.
These two table structures already exist in SQL Server and they are related via their ProjectID columns: PROJECT_IDENTITY has ProjectID as its primary key, and tcc_Block references it with a foreign key.
Every time I import the data, the ProjectID in the parent table is created and incremented, but the one in the child table is always NULL.
The trigger never fires!?
ALTER TRIGGER [dbo].[InsertTest]
ON [dbo].[tcc_Block]
AFTER INSERT
AS
BEGIN
    DECLARE @proj int;

    SELECT @proj = MAX(ProjectID)
    FROM PROJECT_IDENTITY;

    UPDATE tcc_Block
    SET ProjectID = @proj
    WHERE ProjectID IS NULL;
END;
GO
Bulk inserts generally do not fire triggers unless this is explicitly requested (see the FIRE_TRIGGERS option). If the package is edited in SSIS (the Import and Export Wizard generates SSIS packages), you can select the Fire triggers option on the Options page of the Bulk Insert Task Editor, but the Import and Export Wizard itself does not expose a way to set this option. You can save the package to the file system and edit it in Visual Studio to enable the option, or you can export the data as flat files and import them with the BULK INSERT command, specifying the FIRE_TRIGGERS option (sketched below).
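A minimal BULK INSERT sketch for that last approach (the file path and terminators are placeholders, not from the original question):
BULK INSERT dbo.tcc_Block
FROM 'C:\Data\tcc_Block.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRE_TRIGGERS        -- run the table's AFTER INSERT triggers during the bulk load
);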
I had a similar issue with a SQL 2008 export that I had saved as a .dtsx package on the file system. I opened the package in SSMS 2016, where it opens as an XML file, and found the "FastLoadOptions" property: <property id="72" name="FastLoadOptions">.
Then I scrolled to the end of the line where the options are listed and added FIRE_TRIGGERS, so the value reads:
TABLOCK,CHECK_CONSTRAINTS,FIRE_TRIGGERS</property>
I saved it and reloaded it into my job step (don't forget to check "Use 32 Bit runtime" on the Execution options tab), and it works great.

How to overcome truncation error?

I am trying to use the Import and Export Wizard to move a small data set from a CSV file to an existing (empty) table. I used Script Table As > CREATE To to get the full DDL for this table, so I know the two fields causing problems are both varchar(50). I'm getting this error message:
Error 0xc020902a: Data Flow Task 1: The "Source - Reconciliation_dbo_agg_boc_consolidated_csv.Outputs[Flat File Source Output].Columns["ReportScope"]" failed because truncation occurred, and the truncation row disposition on "Source - Reconciliation_dbo_agg_boc_consolidated_csv.Outputs[Flat File Source Output].Columns["ReportScope"]" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
(SQL Server Import and Export Wizard)
The max length of any value in those fields is 49 characters, so I'm not sure why SQL Server is complaining about truncation. Is there any way to disable this error check and just force it to work? It should work as-is! Thanks everyone.
Is there any way to disable this error check and just force it to work? It should work as-is! Thanks everyone.
Yes. If you're using the wizard, you can view the table schema before running it, and check the option to ignore truncation.
The max length of any value in those fields is 49 characters, so I'm not sure why SQL Server is complaining about truncation.
The default datatype of the source column may be a text type when using the Import Wizard, so change it to varchar(50) on the Advanced tab of the flat file source.
To be safe, check the column data types in both the source and the destination. If they don't match, declare the problem columns in the table as varchar with a generous maximum length, for example varchar(500) or varchar(max), and see whether the import succeeds.
Change max length of Varchar column:
ALTER TABLE YourTable ALTER COLUMN YourColumn VARCHAR (500);
Be aware that the column will then default to allowing NULLs, even if it was originally defined as NOT NULL; omitting the nullability specification in an ALTER TABLE ... ALTER COLUMN is always treated as
ALTER TABLE YourTable ALTER COLUMN YourColumn VARCHAR (500) NULL;
So check whether the column should be nullable or NOT NULL based on your requirements and state it explicitly when you change it.
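For example, to widen the column while keeping it non-nullable (table and column names are the same placeholders used above):
ALTER TABLE YourTable ALTER COLUMN YourColumn VARCHAR (500) NOT NULL;  -- fails if the column already contains NULLs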
Use the steps below to see how to import a CSV file into a database using SQL Server Management Studio:
Even when bulk copy and other bulk import options are not available on your SQL Server, you can import a CSV-formatted file into your database using SQL Server Management Studio.
First, create a table in your database into which you will import the CSV file (a sketch is shown after these steps). After the table is created:
Log in to your database using SQL Server Management Studio.
Right click the database and select Tasks -> Import Data...
Click the Next > button.
For Data Source, select Flat File Source. Then use the Browse button to select the CSV file. Spend some time configuring the data import before clicking the Next > button.
For Destination, select the correct database provider (e.g. for SQL Server 2012, you can use SQL Server Native Client 11.0). Enter the Server name; check Use SQL Server Authentication, enter the User name, Password, and Database before clicking the Next > button.
In the Select Source Tables and Views window, you can Edit Mappings before clicking the Next > button.
Check Run immediately and click the Next > button.
Click the Finish button to run the package.
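For that first step, a hedged example of a destination table for a simple CSV (the table and column names are hypothetical, chosen only for illustration):
CREATE TABLE dbo.CsvImport
(
    CustomerId   INT          NULL,
    CustomerName VARCHAR(100) NULL,
    CreatedDate  DATETIME     NULL
);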

Import from MSSQL to Excel when the data contains embedded line breaks

I am trying to export the results of a SQL query (MS SQL 2014) to Excel 2010. The problem is that the column values contain line breaks, so the remaining data gets pushed onto the next row in Excel. Is there a way to get rid of this while keeping the column otherwise as-is, or maybe encapsulating the column so it is treated as one cell and the line breaks are ignored?
Here is my SQL Query:
select * from tbl_case
where (casenature not like '%<strong>%'
and casenature not like '%<br />%'
and casenature like '%from:%')
and userid in (select employeelogin from tbl_employees where riding='15010')
This works fine if I use the normal way to import data from MSSQL into Excel, which is: in Excel, Data -> From Other Sources -> From SQL Server.
To import data resulting from an arbitrary SQL query:
At the last step of the wizard (where you select the range), press Properties...
In the resulting Connection properties window:
On the Definition tab, set Command type to SQL
In the Command text field, write your query
Alternatively, you can replace the line-break characters with spaces in the SELECT statement itself and then export to Excel; a sketch follows.
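A hedged version of the query above with the line breaks stripped, shown here for the casenature column only (CHAR(13) is a carriage return, CHAR(10) a line feed):
select replace(replace(casenature, char(13), ' '), char(10), ' ') as casenature
from tbl_case
where (casenature not like '%<strong>%'
and casenature not like '%<br />%'
and casenature like '%from:%')
and userid in (select employeelogin from tbl_employees where riding='15010')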

Import datatype DECIMAL from CSV to SQL server

When importing a .csv file into SQL Server 2008 I am running into a problem.
In my .csv file the decimals are written with a comma separator (e.g. 1234,34112), and it seems that SQL Server does not understand ',' as a decimal separator.
My solution has been to import the column using BULK INSERT as VARCHAR and convert it to decimal afterwards. It works, but I guess there may be a better solution that I'm not seeing.
Could you help me with that?
Thanks in advance
There are really only two ways of doing it. One you have already mentioned: import into SQL Server as text and then convert, something like this:
CREATE TABLE ImportTable (Value NVARCHAR(1000));  -- import the data as it is, into a plain text column

INSERT INTO ImportTable VALUES
('1234,34112'), ('12651635,68466'), ('1234574,5874');

-- Add a NUMERIC column to hold the converted values
ALTER TABLE ImportTable ADD NewColumn NUMERIC(28,8);
GO
-- Swap the decimal comma for a period and convert
UPDATE ImportTable
SET NewColumn = CAST(REPLACE(Value, ',', '.') AS NUMERIC(28,8));
Or you can change the separator in your Excel sheet before you import it.
Unless you are using SSIS to import the data, it is usually best to get your data into SQL Server first using loose datatypes and then do any data manipulation needed.
SQL Server Management Studio 17 provides a new direct option to import flat files that handles decimal CSV columns for you. Right-click your database, then click Tasks > Import Flat File...

How to import CSV files

How can I import CSV file data into a SQL Server 2000 table? I need to insert data from a CSV file into the table twice a day. The table has more than 20 fields, but I only need to insert values into 6 of them.
I faced the same problem before; I can suggest you start reading here. The author covers: "This is very common request recently – How to import CSV file into SQL Server? How to load CSV file into SQL Server Database Table? How to load comma delimited file into SQL Server? Let us see the solution in quick steps."
I need to insert data from CSV file to table twice a day.
Use DTS to perform the import, then schedule it.
For SQL 2000, I would use DTS. You can then schedule this as a job when you're happy with it.
Below is a good Microsoft link explaining how to use it.
Data Transformation Services (DTS)
You describe two distinct problems:
the CSV import, and
the extraction of data into only those 6 fields.
So break your solution down into two steps:
import the CSV into a raw staging table, and
then insert into your six 'live' fields from that staging table.
There is a T-SQL statement for the first part, called BULK INSERT; the syntax looks like this:
BULK INSERT target_staging_table_in_database
FROM 'C:\Path_to\CSV_file.csv'
WITH
(
DATAFILETYPE = 'CHAR'
,FIRSTROW = 2
,FIELDTERMINATOR = ','
,ROWTERMINATOR = '\n'
);
Adjust to taste, and consult the docs for more options. You might also want to TRUNCATE or DELETE FROM your staging table before doing the bulk insert so you don't have any old data in there.
Once you get the information into the database, doing an UPDATE or INSERT into those six fields should be straightforward (see the sketch below).
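A minimal sketch of that second step, assuming the staging table above and a hypothetical live table with six columns (all table and column names here are illustrative, not from the original question):
INSERT INTO dbo.LiveTable (Field1, Field2, Field3, Field4, Field5, Field6)
SELECT Field1, Field2, Field3, Field4, Field5, Field6
FROM target_staging_table_in_database;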
You can make use of SQL Server Integration Services (SSIS). Creating the package is a one-time task; from then on you just run that package.
You can also try Bulk Insert as daniel explained.
You can also try the Import/Export Wizard in SQL Server 2000.
