Error altering table name MSSQL - sql-server

Wanted to see if I could get some help with altering and renaming tables in SQL Server 2008. I'm getting an error message on the sp_rename call; am I doing something wrong?
-- Archives data existing one week ago and then recreates production table (registration data)
IF EXISTS (SELECT * FROM dbo.tab_reg13_old) DROP TABLE dbo.tab_reg13_old;
sp_rename 'dbo.tab_reg13', 'dbo.tab_reg13_old';
CREATE TABLE [dbo].[tab_reg13](
[badge] [nvarchar](255) NULL,
[firstname] [nvarchar](255) NULL,
[lastname] [nvarchar](255) NULL,
[degree] [nvarchar](255) NULL,
[title] [nvarchar](255) NULL,
[company] [nvarchar](255) NULL,
[address1] [nvarchar](255) NULL,
[address2] [nvarchar](255) NULL,
[city] [nvarchar](255) NULL,
[state] [nvarchar](255) NULL,
[zipcode] [nvarchar](255) NULL,
[country] [nvarchar](255) NULL,
[email] [nvarchar](255) NULL,
[association] [nvarchar](255) NULL,
[regclass] [nvarchar](255) NULL,
[regtimestamp] [datetime] NULL
) ON [PRIMARY];
Getting this error message:
Msg 102, Level 15, State 1, Line 5
Incorrect syntax near 'sp_rename'.

You need EXEC in front of the procedure call:
exec sp_rename 'dbo.tab_reg13', 'tab_reg13_old';
A stored procedure can be called without EXEC only when it is the first statement in a batch; anywhere else you must add EXEC (or EXECUTE). Note also that the new name passed to sp_rename should not include the schema prefix; passing 'dbo.tab_reg13_old' can leave you with a table literally named [dbo.tab_reg13_old].
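Putting it together, a corrected version of the batch might look like this (a sketch; the OBJECT_ID existence check also avoids the error that IF EXISTS (SELECT * FROM ...) raises when the table does not exist yet):

```sql
-- Drop the old archive copy only if it actually exists
IF OBJECT_ID(N'dbo.tab_reg13_old', N'U') IS NOT NULL
    DROP TABLE dbo.tab_reg13_old;

-- EXEC is required because this is not the first statement in the batch;
-- the new name deliberately omits the schema prefix
EXEC sp_rename 'dbo.tab_reg13', 'tab_reg13_old';

-- Recreate the production table (columns abbreviated; use the full
-- definition from the original script)
CREATE TABLE dbo.tab_reg13 (
    badge        nvarchar(255) NULL,
    firstname    nvarchar(255) NULL,
    -- ... remaining columns as in the original definition ...
    regtimestamp datetime NULL
) ON [PRIMARY];
```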

Azure SQL Server: Insert 20m records between tables is slow (60+ mins)

I've been going through the boards and tried lots of different things without luck, so I thought I'd reach out to the community directly.
The problem:
I have an Azure SQL Server DB that has 2 tables:
DATA_IMPORT (source table I import data into via Data Factory; it gets truncated on each load of approx. 20m rows).
DATA_SOURCE (table I insert the 20m rows from DATA_IMPORT into, with some simple transformation; this is expected to reach about 0.5b rows).
I'm a little new to SQL Server and have now resorted to having no indexes on DATA_SOURCE to see if that helps... it still takes 60+ minutes.
No indexes are needed on DATA_IMPORT, since it's just a holding table.
Table Structures
CREATE TABLE [dbo].[DATA_IMPORT](
[field1] [nvarchar](255) NOT NULL,
[field2] [nvarchar](255) NOT NULL,
[field3] [nvarchar](255) NOT NULL,
[field4] [nvarchar](255) NOT NULL,
[field5] [nvarchar](255) NOT NULL,
[field6] [nvarchar](255) NOT NULL,
[field7] [nvarchar](255) NOT NULL,
[field8] [nvarchar](255) NOT NULL,
[field9] [nvarchar](255) NOT NULL,
[field10] [nvarchar](255) NOT NULL,
[measure1] int NULL,
[measure2] decimal(10,2) NULL,
[measure3] decimal(10,5) NULL,
[measure4] decimal(7,2) NULL,
[measure5] decimal(10,5) NULL
)
CREATE TABLE [dbo].[DATA_SOURCE](
[EFF_DATE] [datetime] NOT NULL,
[EFF_STATUS] [nvarchar](255) NOT NULL,
[DATA_SOURCE] [nvarchar](255) NOT NULL,
[PERIOD] [date] NOT NULL,
[field1] [nvarchar](255) NOT NULL,
[field2] [nvarchar](255) NOT NULL,
[field3] [nvarchar](255) NOT NULL,
[field4] [nvarchar](255) NOT NULL,
[field5] [nvarchar](255) NOT NULL,
[field6] [nvarchar](255) NOT NULL,
[field7] [nvarchar](255) NOT NULL,
[field8] [nvarchar](255) NOT NULL,
[field9] [nvarchar](255) NOT NULL,
[field10] [nvarchar](255) NOT NULL,
[measure1] int NULL,
[measure2] decimal(10,2) NULL,
[measure3] decimal(10,5) NULL,
[measure4] decimal(7,2) NULL,
[measure5] decimal(10,5) NULL,
[measure6] decimal(11,3) NULL,
[REC_CREATEDBY] [nvarchar](50) NOT NULL,
[REC_CREATEDON] [datetime] NOT NULL,
[REC_LASTUPDATEDBY] [nvarchar](50) NULL,
[REC_LASTUPDATEDON] [datetime] NULL
)
INSERT SQL
--YYYY-MM-DD
DECLARE @varPeriod varchar(30) = '2020-01-01'
DECLARE @varDataSource varchar(255) = 'https://blah.com'
INSERT INTO [DATA_SOURCE] (
[EFF_DATE],[EFF_STATUS],[DATA_SOURCE],[PERIOD],
[field1],[field2],[field3],[field4],[field5],
[field6],[field7],[field8],[field9],[field10],
[measure1],[measure2],[measure3],[measure4],[measure5],
[measure6],
[REC_CREATEDBY],[REC_CREATEDON], [REC_LASTUPDATEDBY], [REC_LASTUPDATEDON])
SELECT
SYSDATETIME() AS [EFF_DATE]
,'A' AS [EFF_STATUS]
,@varDataSource AS [DATA_SOURCE],
CONVERT(varchar, @varPeriod, 100) AS [PERIOD],
[field1],[field2],[field3],[field4],[field5],
[field6],[field7],[field8],[field9],[field10],
[measure1],[measure2],[measure3],[measure4],[measure5]
,CAST([measure1]*[measure2] AS numeric(11,3)) as [measure6]
,'DATA_LOADER' AS [REC_CREATEDBY]
,SYSDATETIME() AS [REC_CREATEDON]
,'DATA_LOADER' AS [REC_LASTUPDATEDBY]
,SYSDATETIME() AS [REC_LASTUPDATEDON]
FROM [dbo].[DATA_IMPORT];
GO
What performance recommendations do you have so I can insert these 20m rows quickly?
I will need to add 3-4 indexes too, once I join to my dimensional data.
Thanks for your help all
Jay
EDIT: use BULK INSERT for better performance when inserting data.
PS: another important thing to look at is the DTU / vCores you assign to your database.
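One common approach worth trying (a sketch, not a tested tuning recommendation for this exact workload) is to request a bulk-optimized insert with the TABLOCK hint; when the target is a heap with no nonclustered indexes, this lets SQL Server take a bulk update lock and can enable minimal logging and parallel insert:

```sql
-- TABLOCK allows a bulk update lock; into a heap with no indexes this
-- can make the INSERT ... SELECT minimally logged and parallel
INSERT INTO dbo.DATA_SOURCE WITH (TABLOCK) (
    EFF_DATE, EFF_STATUS, DATA_SOURCE, PERIOD,
    field1, field2, field3, field4, field5,
    field6, field7, field8, field9, field10,
    measure1, measure2, measure3, measure4, measure5,
    measure6,
    REC_CREATEDBY, REC_CREATEDON, REC_LASTUPDATEDBY, REC_LASTUPDATEDON)
SELECT
    SYSDATETIME(), 'A', @varDataSource, @varPeriod,
    field1, field2, field3, field4, field5,
    field6, field7, field8, field9, field10,
    measure1, measure2, measure3, measure4, measure5,
    CAST(measure1 * measure2 AS numeric(11,3)),
    'DATA_LOADER', SYSDATETIME(), 'DATA_LOADER', SYSDATETIME()
FROM dbo.DATA_IMPORT;
```

Shrinking the nvarchar(255) columns to realistic widths and deferring index creation until after the load are also worth measuring.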

sql server profiler 2014 "failed to open a table"

I'm trying to replay a SQL Server 2014 Profiler trace that I saved to a DB table. When I open it, I get a "Failed to open a table" error message. There is nothing in the Windows logs.
I googled and this error used to happen when upgrading a SQL Server 2000 system to a 64 bit system. That doesn't apply here. I'm running on Windows Server 2012 with a fresh install of SQL Server 2014.
The trace was a TSQL_replay template. I saved it to a table using the following code. The code produced a table with the definition shown.
SELECT *
INTO myTrace
FROM ::fn_trace_gettable(N'c:\Logs\sql_trace_events.trc', default)
CREATE TABLE [dbo].[myTrace]
(
[TextData] [ntext] NULL,
[BinaryData] [image] NULL,
[DatabaseID] [int] NULL,
[TransactionID] [bigint] NULL,
[LineNumber] [int] NULL,
[NTUserName] [nvarchar](256) NULL,
[NTDomainName] [nvarchar](256) NULL,
[HostName] [nvarchar](256) NULL,
[ClientProcessID] [int] NULL,
[ApplicationName] [nvarchar](256) NULL,
[LoginName] [nvarchar](256) NULL,
[SPID] [int] NULL,
[Duration] [bigint] NULL,
[StartTime] [datetime] NULL,
[EndTime] [datetime] NULL,
[Reads] [bigint] NULL,
[Writes] [bigint] NULL,
[CPU] [int] NULL,
[Permissions] [bigint] NULL,
[Severity] [int] NULL,
[EventSubClass] [int] NULL,
[ObjectID] [int] NULL,
[Success] [int] NULL,
[IndexID] [int] NULL,
[IntegerData] [int] NULL,
[ServerName] [nvarchar](256) NULL,
[EventClass] [int] NULL,
[ObjectType] [int] NULL,
[NestLevel] [int] NULL,
[State] [int] NULL,
[Error] [int] NULL,
[Mode] [int] NULL,
[Handle] [int] NULL,
[ObjectName] [nvarchar](256) NULL,
[DatabaseName] [nvarchar](256) NULL,
[FileName] [nvarchar](256) NULL,
[OwnerName] [nvarchar](256) NULL,
[RoleName] [nvarchar](256) NULL,
[TargetUserName] [nvarchar](256) NULL,
[DBUserName] [nvarchar](256) NULL,
[LoginSid] [image] NULL,
[TargetLoginName] [nvarchar](256) NULL,
[TargetLoginSid] [image] NULL,
[ColumnPermissions] [int] NULL,
[LinkedServerName] [nvarchar](256) NULL,
[ProviderName] [nvarchar](256) NULL,
[MethodName] [nvarchar](256) NULL,
[RowCounts] [bigint] NULL,
[RequestID] [int] NULL,
[XactSequence] [bigint] NULL,
[EventSequence] [bigint] NULL,
[BigintData1] [bigint] NULL,
[BigintData2] [bigint] NULL,
[GUID] [uniqueidentifier] NULL,
[IntegerData2] [int] NULL,
[ObjectID2] [bigint] NULL,
[Type] [int] NULL,
[OwnerID] [int] NULL,
[ParentName] [nvarchar](256) NULL,
[IsSystem] [int] NULL,
[Offset] [int] NULL,
[SourceDatabaseID] [int] NULL,
[SqlHandle] [image] NULL,
[SessionLoginName] [nvarchar](256) NULL,
[PlanHandle] [image] NULL,
[GroupID] [int] NULL
)
I tried the same thing and did not run into any issues. Have you tried capturing a new trace and saving it to a differently named table?
You have to wait... the 'Replay' option is grayed out for about a minute until the script fully loads.
Had the same issue, and it turned out I was trying to open a trace recorded in Profiler 2014 with Profiler 2008 on a different SQL instance in order to replay the trace. Upgrading Profiler to 2014 on the replay instance solved the problem.
You have to create a table with a specific structure first. Try exporting a trace to a table from Profiler and look at what it creates; then insert just that subset of columns into the table. Here is what I used for SQL Server 2012-2017:
------- Trace created with Replay template
USE [testdb]
GO
/****** Object: Table [dbo].[TraceTable] Script Date: 29-Oct-18 17:37:07 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[TraceTableSQL1]
(
[RowNumber] [int] IDENTITY ( 0 , 1 ) NOT NULL ,
[EventClass] [int] NULL ,
[BinaryData] [image] NULL ,
[DatabaseID] [int] NULL ,
[NTUserName] [nvarchar] ( 128 ) NULL ,
[NTDomainName] [nvarchar] ( 128 ) NULL ,
[HostName] [nvarchar] ( 128 ) NULL ,
[ClientProcessID] [int] NULL ,
[ApplicationName] [nvarchar] ( 128 ) NULL ,
[LoginName] [nvarchar] ( 128 ) NULL ,
[SPID] [int] NULL ,
[StartTime] [datetime] NULL ,
[EndTime] [datetime] NULL ,
[Error] [int] NULL ,
[DatabaseName] [nvarchar] ( 128 ) NULL ,
[RowCounts] [bigint] NULL ,
[RequestID] [int] NULL ,
[EventSequence] [bigint] NULL ,
[IsSystem] [int] NULL ,
[ServerName] [nvarchar] ( 128 ) NULL ,
[TextData] [ntext] NULL ,
[EventSubClass] [int] NULL ,
[Handle] [int] NULL ,
PRIMARY KEY CLUSTERED
(
[RowNumber] ASC
)
WITH ( PAD_INDEX = OFF , STATISTICS_NORECOMPUTE = OFF , IGNORE_DUP_KEY = OFF , ALLOW_ROW_LOCKS = ON , ALLOW_PAGE_LOCKS = ON ) ON [PRIMARY]
)
ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO
INSERT [TraceTableSQL1]
SELECT
[EventClass] ,
[BinaryData] ,
[DatabaseID] ,
[NTUserName] ,
[NTDomainName] ,
[HostName] ,
[ClientProcessID] ,
[ApplicationName] ,
[LoginName] ,
[SPID] ,
[StartTime] ,
[EndTime] ,
[Error] ,
[DatabaseName] ,
[RowCounts] ,
[RequestID] ,
[EventSequence] ,
[IsSystem] ,
[ServerName] ,
[TextData] ,
[EventSubClass] ,
[Handle]
FROM sys.fn_trace_gettable ( N'd:\temp\profiler.trc' , DEFAULT )

Using exec() function when inserting to a table

When you have two tables like this
CREATE TABLE #BranchType
(
[External_BranchTypeID] [uniqueidentifier] DEFAULT newsequentialid() NOT NULL,
[BranchTypeID] [smallint] identity(1,1) ,
[BranchTypeDescription] [nvarchar](20) NOT NULL,
[DateCreated] [datetime] NOT NULL,
[UserCreated] [nvarchar](20) NOT NULL,
[DateModified] [datetime] NULL,
[UserModified] [nvarchar](20) NULL,
[IsDeleted] [bit] NOT NULL
)
CREATE TABLE BranchSubType
(
[External_BranchSubTypeID] [uniqueidentifier] DEFAULT newsequentialid() NOT NULL,
[BranchSubTypeID] [smallint] identity(1,1) ,
[BranchTypeID] [uniqueidentifier] NOT NULL,
[BranchSubTypeDescription] [nvarchar](30) NOT NULL,
[FinancialSystemTypeId] [smallint] NOT NULL,
[DateCreated] [datetime] NOT NULL,
[UserCreated] [nvarchar](20) NOT NULL,
[DateModified] [datetime] NULL,
[UserModified] [nvarchar](20) NULL,
[IsDeleted] [bit] NOT NULL
)
How can you do an insert like the one below in SQL Server? I am trying to return the guid value
DECLARE @SQLCmd VARCHAR(max)
SET @SQLCmd = 'SELECT External_BranchTypeID FROM #BranchType WHERE BranchTypeID = 1'
INSERT INTO BranchSubType (BranchTypeID, BranchSubTypeDescription, BranchSubTypeId, DateCreated, UserCreated,IsDeleted)
VALUES ( exec(@SQLCmd), 'Normal',1, getdate(), 'System',0) --FROM #BranchType A WHERE A.BranchTypeID = 1
In this case you don't need EXEC at all; an INSERT ... SELECT does the job. Note that BranchSubTypeID is an IDENTITY column, so it should be left out of the column list (it is generated automatically); FinancialSystemTypeId, which is NOT NULL, is the column that actually needs a value:
INSERT INTO BranchSubType
(BranchTypeID,
BranchSubTypeDescription,
FinancialSystemTypeId,
DateCreated,
UserCreated,
IsDeleted)
SELECT External_BranchTypeID,
'Normal',
1,
getdate(),
'System',
0
FROM #BranchType WHERE BranchTypeID = 1
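Since the question mentions wanting the GUID value back, the OUTPUT clause can capture the inserted values in the same statement. A sketch (the @inserted table variable is illustrative, not part of the original code):

```sql
-- Capture the newly generated GUIDs as the rows are inserted
DECLARE @inserted TABLE (External_BranchSubTypeID uniqueidentifier);

INSERT INTO BranchSubType
       (BranchTypeID, BranchSubTypeDescription, FinancialSystemTypeId,
        DateCreated, UserCreated, IsDeleted)
OUTPUT inserted.External_BranchSubTypeID INTO @inserted
SELECT External_BranchTypeID, 'Normal', 1, getdate(), 'System', 0
FROM #BranchType
WHERE BranchTypeID = 1;

-- The GUID(s) generated by the insert
SELECT External_BranchSubTypeID FROM @inserted;
```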

SQL Server BULK INSERT FROM different schemas

I have a database that can have data updated from two external parties.
Each of those parties sends a pipe delimited text file that is BULK INSERTED into the staging table.
I now want to change the schema for one of the parties by adding a few columns, but unfortunately this breaks the BULK INSERT for the other party, even though the new columns are all added as NULLABLE.
Is there any obvious solution to this?
TABLE SCHEMA:
CREATE TABLE [dbo].[CUSTOMER_ENTRY_LOAD](
[CARD_NUMBER] [varchar](12) NULL,
[TITLE] [varchar](6) NULL,
[LAST_NAME] [varchar](34) NULL,
[FIRST_NAME] [varchar](40) NULL,
[MIDDLE_NAME] [varchar](40) NULL,
[NAME_ON_CARD] [varchar](26) NULL,
[H_ADDRESS_PREFIX] [varchar](50) NULL,
[H_FLAT_NUMBER] [varchar](5) NULL,
[H_STREET_NUMBER] [varchar](10) NULL,
[H_STREET_NUMBER_SUFFIX] [varchar](5) NULL,
[H_STREET] [varchar](50) NULL,
[H_SUBURB] [varchar](50) NULL,
[H_CITY] [varchar](50) NULL,
[H_POSTCODE] [varchar](4) NULL,
[P_ADDRESS_PREFIX] [varchar](50) NULL,
[P_FLAT_NUMBER] [varchar](5) NULL,
[P_STREET_NUMBER] [varchar](10) NULL,
[P_STREET_NUMBER_SUFFIX] [varchar](5) NULL,
[P_STREET] [varchar](50) NULL,
[P_SUBURB] [varchar](50) NULL,
[P_CITY] [varchar](50) NULL,
[P_POSTCODE] [varchar](4) NULL,
[H_STD] [varchar](3) NULL,
[H_PHONE] [varchar](7) NULL,
[C_STD] [varchar](3) NULL,
[C_PHONE] [varchar](10) NULL,
[W_STD] [varchar](3) NULL,
[W_PHONE] [varchar](7) NULL,
[W_EXTN] [varchar](5) NULL,
[DOB] [smalldatetime] NULL,
[EMAIL] [varchar](50) NULL,
[DNS_STATUS] [bit] NULL,
[DNS_EMAIL] [bit] NULL,
[CREDITCARD] [char](1) NULL,
[PRIMVISACUSTID] [int] NULL,
[PREFERREDNAME] [varchar](100) NULL,
[STAFF_NUMBER] [varchar](50) NULL,
[CUSTOMER_ID] [int] NULL,
[IS_ADDRESS_VALIDATED] [varchar](50) NULL
) ON [PRIMARY]
BULK INSERT STATEMENT:
SET @string_temp = 'BULK INSERT customer_entry_load FROM '+char(39)+@inpath
+@current_file+'.txt'+char(39)+' WITH (FIELDTERMINATOR = '+char(39)+'|'+char(39)
+', MAXERRORS=1000, ROWTERMINATOR = '+char(39)+'\n'+char(39)+')'
SET DATEFORMAT dmy
EXEC(@string_temp)
The documentation describes how to use a format file to handle the scenario where the target table has more columns than the source file. An alternative that can sometimes be easier is to create a view on the table and BULK INSERT into the view instead of the table; this possibility is described in the same documentation.
And please always mention your SQL Server version.
Using OPENROWSET with BULK allows you to use the file in a query; you can then format the data and select only the columns you need.
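A sketch of that approach (the file and format-file paths and the column subset here are hypothetical; a format file describing the file layout is required when the file shape differs from the table):

```sql
-- Query the file through OPENROWSET(BULK ...) and insert only the
-- columns the target table expects
INSERT INTO dbo.CUSTOMER_ENTRY_LOAD (CARD_NUMBER, TITLE, LAST_NAME, FIRST_NAME)
SELECT t.CARD_NUMBER, t.TITLE, t.LAST_NAME, t.FIRST_NAME
FROM OPENROWSET(
         BULK 'C:\loads\party_a.txt',          -- hypothetical data file
         FORMATFILE = 'C:\loads\party_a.fmt'   -- hypothetical format file
     ) AS t;
```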
In the end I have handled the two different cases with two different BULK INSERT statements (depending on which file is being processed). It seems like there isn't a way to do what I was trying to do with one statement.
You could use the format file idea supplied by @Pondlife.
Adapt your insert dynamically based on the input file name (provided there are unique differences between the external parties): using a CASE expression, simply select the correct format file based on the unique identifier in the file name.
DECLARE @formatFile varchar(max);
SET @formatFile =
CASE
WHEN @current_file LIKE '%uniqueIdentifier%'
THEN 'file1'
ELSE 'file2'
END
SET @string_temp = 'BULK INSERT customer_entry_load FROM '+char(39)+@inpath
+@current_file+'.txt'+char(39)+' WITH (FORMATFILE = '+char(39)+@formatFile+char(39)
+')'
SET DATEFORMAT dmy
EXEC(@string_temp)
Hope that helps!

Error 4866 when bulk inserting data from csv

I'm trying to load data from a csv file and keep getting these errors. Am I missing some params in the bulk insert script or do I need to modify the file before I attempt this?
Msg 4866, Level 16, State 1, Line 1
The bulk load failed. The column is too long in the data file for row 1, column 54. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
Here's the script
BULK
INSERT BrowseNotes
FROM 'C:\Users\Jarek\browseNotes2.csv'
WITH
(
FIELDTERMINATOR = ','
, ROWTERMINATOR = '\n'
)
Here are sample rows from the file; I delete the first (header) row before attempting to load. The rows end with ",\n". I've tried replacing \n with \r\n and removing the last comma, and still get the same error.
LoanType,Maturity,LoanClass,Borrower,LoanStatus,TimeLeftBeforeExpiration,MonthlyPayment,LoanMaturity,JobTenureYearsString,AmountToInvest,AmountMissingToClose,NumberOfPayments,Id,State,Type,Status,Aid,Amount,Duration,StartD,IntRate,Grade,Purpose,HousingStatus,JobTenure,Income,CreditClassId,City,UnfundedAmnt,Fico,OpenCreditLines,TotalCreditLines,Inq6Months,RevolvUtil,FundedPercentage,FundedAmount,EmpStatus,JobTitle,AppDate,AppAmount,Employer,DelinquentAmount,EarliestCreditLine,PubRecords,DTI,AppExpiration,LapStatus,IncomeVStatus,CreditReportD,RevolvCreditBal,AccntsNowDelinquent,Delinquencies2Yrs,MnthsSinceLastDelinquency,MnthsSinceLastRecord
PERSONAL,60,C4,1248804,INFUNDING,279589,344.62,Year5,8 years,0,625.0,60,1020047,PA,1,1,1248804,13775.0,60,2011-11-11 11:40:18,0.1527,C,debt_consolidation,MORTGAGE,96,50000.0,124,PHILADELPHIA,625.0,679-713,10,21,2,62.2,0.9565972222222222,13775.0,EMPLOYED,"Quality Assurance Manager",2011-11-11 11:40:18,14400.0,"J. Ambrogi Food Distribution",0.0,01/27/2003,0,23.14,2011-11-25 11:40:18,APPROVED_CR,NOT_REQUIRED,11/11/2011,22906.0,0,0,null,null,
PERSONAL,60,A5,1247389,INFUNDING,180323,289.94,Year5,3 years,0,1975.0,60,1018925,FL,1,1,1247389,12025.0,60,2011-11-10 08:05:52,0.089,A,house,MORTGAGE,36,150000.0,105,orange park,1950.0,750-779,9,25,0,62.9,0.8607142857142858,12050.0,EMPLOYED,"Project Manager",2011-11-10 08:05:52,14000.0,"Scientific Research Corp.",0.0,10/01/1984,0,14.02,2011-11-24 08:05:52,APPROVED_CR,VERIFIED,11/09/2011,43069.0,0,0,null,null,
Here's the table I'm trying to load to
CREATE TABLE [dbo].[BrowseNotes](
[LoanType] [nvarchar](25) NULL,
[Maturity] [tinyint] NULL,
[LoanClass] [nvarchar](2) NULL,
[Borrower] [int] NULL,
[LoanStatus] [nvarchar](25) NULL,
[TimeLeftBeforeExpiration] [int] NULL,
[MonthlyPayment] [smallmoney] NULL,
[LoanMaturity] [nvarchar](10) NULL,
[JobTenureYearsString] [nvarchar](15) NULL,
[AmountToInvest] [smallmoney] NULL,
[AmountMissingToClose] [smallmoney] NULL,
[NumberOfPayments] [tinyint] NULL,
[Id] [int] NULL,
[State] [char](2) NULL,
[Type] [tinyint] NULL,
[Status] [tinyint] NULL,
[Aid] [int] NULL,
[Amount] [smallmoney] NULL,
[Duration] [tinyint] NULL,
[StartD] [datetime] NULL,
[IntRate] [decimal](18, 0) NULL,
[Grade] [char](1) NULL,
[Purpose] [nvarchar](25) NULL,
[HousingStatus] [nvarchar](25) NULL,
[JobTenure] [tinyint] NULL,
[Income] [money] NULL,
[CreditClassId] [smallint] NULL,
[City] [nvarchar](255) NULL,
[UnfundedAmnt] [smallmoney] NULL,
[Fico] [nvarchar](10) NULL,
[OpenCreditLines] [tinyint] NULL,
[TotalCreditLines] [tinyint] NULL,
[Inq6Months] [tinyint] NULL,
[RevolvUtil] [decimal](18, 0) NULL,
[FundedPercentage] [decimal](18, 0) NULL,
[FundedAmount] [smallmoney] NULL,
[EmpStatus] [nvarchar](25) NULL,
[JobTitle] [nvarchar](255) NULL,
[AppDate] [datetime] NULL,
[AppAmount] [money] NULL,
[Employer] [nvarchar](255) NULL,
[DelinquentAmount] [money] NULL,
[EarliestCreditLine] [datetime] NULL,
[PubRecords] [tinyint] NULL,
[DTI] [decimal](18, 0) NULL,
[AppExpiration] [datetime] NULL,
[LapStatus] [nvarchar](25) NULL,
[IncomeVStatus] [nvarchar](25) NULL,
[CreditReportD] [datetime] NULL,
[RevolvCreditBal] [money] NULL,
[AccntsNowDelinquent] [tinyint] NULL,
[Delinquencies2Yrs] [tinyint] NULL,
[MnthsSinceLastDelinquency] [nvarchar](10) NULL,
[MnthsSinceLastRecord] [nvarchar](10) NULL
)
What database is the table in? Try fully qualifying your table name i.e.
`mydb.dbo.BrowseNotes`
Though it certainly sounds like it's not recognizing the ROWTERMINATOR.
I know this is coming in waaaaaay late, but I figured out how to do this.
DECLARE @sql varchar(1000)
SET @sql = '
BULK
INSERT BrowseNotes
FROM "C:\Users\Jarek\browseNotes2.csv"
WITH (
FIELDTERMINATOR = ",",
ROWTERMINATOR = "' + char(10) + '"
)'
exec(@sql)
GO
This script works by forcing the row terminator to a literal 0x0A (line feed), which works for both \r\n and \n terminated data.
I would also suggest using a pipe character (or anything not contained in your data) for a fieldterminator. BULK INSERT is not very tolerant of embedded field terminators in the data.
Also, adding FIRSTROW to the statement does not skip field validation for the first row. So you have to strip the headers before import, not just skip them.
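A simpler variant of the same idea (a sketch): recent SQL Server versions accept a hex escape for the row terminator directly, so no dynamic SQL is needed, and FIRSTROW = 2 skips the header line, though as noted above the header row must still pass field-count validation:

```sql
BULK INSERT BrowseNotes
FROM 'C:\Users\Jarek\browseNotes2.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '0x0a',  -- literal line feed, no char(10) concatenation
    FIRSTROW        = 2        -- start at the second line of the file
);
```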
