Fetching master table data, getting an error - sql-server

I want to get text values from a master table corresponding to a string (a comma-separated string of the master table's id values) stored in another table.
I am trying:
select maritialtype from tblmastermaritialstatus where MaritalStatusId in (select MaritalStatusId from tblPartnerBasicDetail where userid = 1)
MaritalStatusId in tblPartnerBasicDetail is a string like 1,2,3.
I am getting the error:
Msg 245, Level 16, State 1, Line 1 Conversion failed when converting
the varchar value '1,2,3' to data type tinyint.
How can I resolve it?

A comma-separated nvarchar value is not the same as a list of integers.
You are doing something similar to:
WHERE 1 IN ('1,2,3')
1 is an integer, '1,2,3' is a string (which cannot be implicitly converted to tinyint), so you get an error.
I would recommend normalising your data so that there is no need for comma-separated values.
In the long run this will save you a lot of issues.
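For example, a minimal sketch of what a normalised layout could look like (the junction table name tblPartnerMaritalStatus and its shape are assumptions, not from the question):
-- Hypothetical junction table replacing the CSV column
CREATE TABLE tblPartnerMaritalStatus
(
    userid int NOT NULL,
    MaritalStatusId tinyint NOT NULL,
    PRIMARY KEY (userid, MaritalStatusId)
);
-- The original query then becomes a plain join, with no string parsing
SELECT m.maritialtype
FROM tblmastermaritialstatus AS m
JOIN tblPartnerMaritalStatus AS pms ON pms.MaritalStatusId = m.MaritalStatusId
WHERE pms.userid = 1;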
However, if you wish to stick with CSV, you may find this article helpful:
http://www.nigelrivett.net/SQLTsql/InCsvStringParameter.html
Check the fn_ParseCSVString part specifically
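If you are on SQL Server 2016 or later, STRING_SPLIT is a built-in alternative to a hand-rolled CSV parser. A sketch against the tables from the question (assuming the values split cleanly into integers) could look like:
SELECT m.maritialtype
FROM tblmastermaritialstatus AS m
WHERE m.MaritalStatusId IN
(
    SELECT CAST(s.value AS tinyint)
    FROM tblPartnerBasicDetail AS p
    CROSS APPLY STRING_SPLIT(p.MaritalStatusId, ',') AS s
    WHERE p.userid = 1
);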

Related

Select text column from table in SQL Server stored procedure

I am having difficulty figuring this out. I have an incident table that contains the columns id, comments, incidentdate, and incidentdescID. There are 10 years' worth of data in this table. I wrote a stored procedure to extract the last 4 years' worth of data, but I am running into the following error.
Msg 8152, Level 16, State 10, Line 27
String or binary data would be truncated.
When I change the date range for the incident to be between 2015 and 2016, I do not get an error. When I change it to 2017-2018, I still do not get an error. But when I change it to 2016-2017, I get the error. Also, when I comment out the comments column, I do not get an error no matter what date range I use.
So I was thinking there might be a special character in the Comments column, which is a text column in the Incident table. If that is the case, how would I be able to select that column but remove the special characters in the stored procedure, without making changes to the table?
If you suspect your "Comments" column is the culprit, you can search it for junk values. I got this error once and fixed it by replacing CHAR(10) and CHAR(13) with blanks.
SELECT REPLACE(REPLACE(tbl.comments, CHAR(10), '*JUNK*'), CHAR(13), '*JUNK*') AS CleanedComments
FROM [your table name] tbl
Copy the query result into any editor and search for records containing the JUNK marker.
This typically happens when you are importing data from Excel files or source tables with an NVARCHAR datatype while your destination is a CSV or accepts only VARCHAR.
If that is your case, you simply need to apply the REPLACE function to the column(s) in your procedure.
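As a rough sketch (the table and column names are taken from the question, the four-year filter is assumed), the SELECT inside the procedure could wrap the text column like this; note that REPLACE does not accept the legacy text type directly, so a CAST to varchar(max) is needed first:
SELECT id,
       REPLACE(REPLACE(CAST(comments AS varchar(max)), CHAR(13), ' '),
               CHAR(10), ' ') AS comments,
       incidentdate,
       incidentdescID
FROM incident
WHERE incidentdate >= DATEADD(year, -4, GETDATE());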

Error converting nvarchar to numeric while inserting records

Good Day,
I am trying to insert records from a CSV file into my database table. The problem is with inserting alphanumeric values.
My column datatype is NUMERIC(19,0), and I am expecting numeric values to be inserted into it. For some specific reasons I am getting alphanumeric values in my CSV file. For example:
I am getting value: GBS1182000945008.
My goal here is to remove those three characters, cast the remaining string as NUMERIC, and insert it into my table.
So far I have tried:
CAST((select substring(?,4,30)) AS NUMERIC)
But I am still getting that annoying error. I cannot just ignore those values by using TRY_CONVERT, as I do need those records in my database. What am I missing here?
Edit: I have tested this code separately and it works as expected; the only problem is using it while inserting values. What I have done is check whether the given parameter is numeric or not; if it is, I just insert the parameter, and if not, I convert it to numeric first.
So here is my whole scenario:
IF ISNUMERIC(?) = 1
    -- just insert the parameter as-is
    INSERT INTO table (NUMERIC_FIELD) VALUES (?)
ELSE
    INSERT INTO table (NUMERIC_FIELD) VALUES (CAST(SUBSTRING(?, 4, 30) AS NUMERIC))
Here ? represents the value from CSV.
Try AS NUMERIC(19,0) instead of AS NUMERIC
Also, please note that the extracted substring can have up to 30 digits, which will not fit into the 19 digits of the column's datatype.
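Putting both points together, a minimal sketch (the table name dbo.MyTable is a placeholder, and the ? parameter is shown as a local variable for testing) might be:
DECLARE @raw nvarchar(50) = 'GBS1182000945008';  -- stands in for the ? parameter from the CSV

INSERT INTO dbo.MyTable (NUMERIC_FIELD)
SELECT CASE
           WHEN ISNUMERIC(@raw) = 1 THEN CONVERT(numeric(19, 0), @raw)
           ELSE CONVERT(numeric(19, 0), SUBSTRING(@raw, 4, 19))  -- strip the 3-letter prefix, cap at 19 digits
       END;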

Bulk Load Data Conversion Error - Can't Find Answer

For some reason I keep receiving the following error when trying to bulk insert a CSV file into SQL Express:
Bulk load data conversion error (type mismatch or invalid character for the
specified codepage) for row 2, column 75 (Delta_SM_RR).
Msg 4864, Level 16, State 1, Line 89
Bulk load data conversion error (type mismatch or invalid character for the
specified codepage) for row 3, column 75 (Delta_SM_RR).
Msg 4864, Level 16, State 1, Line 89
Bulk load data conversion error (type mismatch or invalid character for the
specified codepage) for row 4, column 75 (Delta_SM_RR).
... etc.
I have been attempting to insert this column as both decimal and numeric, and keep receiving this same error (if I take out this column, the same error appears for the subsequent column).
Please see below for an example of the data; all data points within this column contain decimals and are rounded after the third decimal point:
Delta_SM_RR
168.64
146.17
95.07
79.85
60.52
61.03
-4.11
-59.57
1563.09
354.36
114.78
253.46
451.5
Any sort of help or advice would be greatly appreciated, as it seems that a number of people on SO have come across this issue. Also, if anyone knows of another automated way to load a CSV into SQL Server, that would be a great help as well.
Edits:
Create Table Example_Table
(
    [Col_1] varchar(255),
    [Col_2] numeric(10,5),
    [Col_3] numeric(10,5),
    [Col_4] numeric(10,5),
    [Col_5] date,
    [Delta_SM_RR] numeric(10,5)
)
GO
BULK INSERT Example_Table
FROM 'C:\pathway\file.csv'
WITH
(
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2
);
Table Schema - This is a standalone table (further calculations and additional tables are built off of this single table, however at the time of bulk insert it is the only table)
It's likely that your data has an error in it; that is, there is a character or value that can't be converted explicitly to NUMERIC or DECIMAL. One way to check this and fix it is to:
1. Change [Delta_SM_RR] numeric(10,5) to [Delta_SM_RR] nvarchar(256)
2. Run the bulk insert
3. Find your error rows: select * from Example_Table where [Delta_SM_RR] like '%[^-.0-9]%'
4. Fix the data at the source, or delete from Example_Table where [Delta_SM_RR] like '%[^-.0-9]%'
The last two statements return/delete rows where there is something other than a digit, period, or hyphen.
For your date column you can follow the same logic: change the column to VARCHAR, then use ISDATE() to find the rows that can't be converted.
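For instance, assuming [Col_5] has also been loaded as nvarchar(256) first, the offending date rows could be located with:
SELECT * FROM Example_Table WHERE ISDATE([Col_5]) = 0;  -- rows whose value cannot be read as a date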
I'll bet anything there is some weird character in your data set. Open your data set in Notepad++ and view the data. Any aberration should become apparent very quickly! The problem is coming from Col75 and it's affecting the first several rows, and thus everything that comes after that also fails to load.
Make sure the .csv is not using text qualifiers and that none of the fields in the .csv have a comma inside the desired value.
I am struggling with this issue right now. The issue is that I have a 68-column report I am trying to import.
Column 17 is a "Description" column that has a double-quote text qualifier on top of the comma delimiting.
A bulk insert with a comma field terminator won't recognize the double-quote text qualifier and will munge all of the data to the right of the offending column.
It looks like, to overcome this, you need to create a .fmt file to instruct BULK INSERT which columns it needs to treat as simply delimited, and which columns it needs to treat as delimited and qualified (see this answer).
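Alternatively, if you are on SQL Server 2017 or later, BULK INSERT itself can parse quoted CSV fields. A sketch reusing the table and path from the question (the FORMAT and FIELDQUOTE options are not available on older versions):
BULK INSERT Example_Table
FROM 'C:\pathway\file.csv'
WITH
(
    FORMAT = 'CSV',          -- parse the file as quoted CSV
    FIELDQUOTE = '"',        -- honour double-quote text qualifiers
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2
);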

Handling embedded new lines when creating/selecting External Tables in SQL Data Warehouse

In SQL Data Warehouse (editors, please don't change this; it is the actual product name) I have a JobCandidate_ext external table that looks like this.
CREATE EXTERNAL TABLE [HumanResources].[JobCandidate_ext](
    [JobCandidateID] int,
    [BusinessEntityID] int,
    [Resume] Varchar(8000),
    [ModifiedDate] Datetime
)
WITH (
    LOCATION = '/[HumanResources].[JobCandidate]/data.txt',
    DATA_SOURCE = AzureStorage,
    FILE_FORMAT = TextFile
)
GO
The column [Resume] was an XML type in SQL Server but in SQL Data Warehouse XML types should be converted to varchar(8000) as described here.
I am using a flat file data.txt to export the data to a blob and then create an external table from it.
The [Resume] column has carriage returns in it (as expected from an XML file), and so when you run a SELECT * FROM [HumanResources].[JobCandidate_ext] you get an error. In this case:
Query aborted-- the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 2 rows processed.
(/[HumanResources].[JobCandidate]/data.txt)Column ordinal: 0, Expected data type: INT, Offending value: some text .... (Column Conversion Error), Error: Error converting data type NVARCHAR to INT.
I know that I cannot configure a row delimiter when creating external tables as described here.
The row delimiter must be UTF-8 and supported by Hadoop’s LineRecordReader. The row delimiter must be either '\r', '\n', or '\r\n'. These are not user-configurable.
And if you try to put quotes on each column field you get this error while selecting rows from the external table: No closing string delimiter.
Query aborted-- the maximum reject threshold (0 rows) was reached while reading from an external source: 1 rows rejected out of total 1 rows processed.
(/[HumanResources].[JobCandidate]/data.txt)Column ordinal: 2, Expected data type: VARCHAR(8000) collate SQL_Latin1_General_CP1_CI_AS, Offending value: 'ShaiBassli (Tokenization failed), Error: No closing string delimiter.
Is there a way to get around this issue?
Today, PolyBase does not allow row or field delimiters inside fields, i.e. it does not allow you to escape these characters. As Greg pointed out, you can vote for this functionality here: https://feedback.azure.com/forums/307516-sql-data-warehouse/suggestions/10600132-polybase-allow-line-ends-within-qualified-text-f
To work around this limitation, you can either pre-process the data (using sed or tr, for example) to replace the unwanted characters before reading it with PolyBase, or you can switch to one of the other PolyBase-supported file formats (RCFile, ORC, Parquet) to avoid dealing with row and field delimiters completely.
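For the second option, a rough sketch (the ParquetFormat name and the new table name are mine, and it assumes the export has been re-written as Parquet files at the same location) could look like:
CREATE EXTERNAL FILE FORMAT ParquetFormat
WITH (FORMAT_TYPE = PARQUET);
GO
CREATE EXTERNAL TABLE [HumanResources].[JobCandidate_ext2](
    [JobCandidateID] int,
    [BusinessEntityID] int,
    [Resume] varchar(8000),
    [ModifiedDate] datetime
)
WITH (
    LOCATION = '/[HumanResources].[JobCandidate]/',
    DATA_SOURCE = AzureStorage,
    FILE_FORMAT = ParquetFormat
);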

SQL import via RODBC - wrong values after implicit? typecast

I have a column in an SQL table which contains 17-digit IDs stored as nvarchar(255) in MSSQL (e.g. '30000005000008498').
If I run an SQL query on this column using the RODBC library, the data is implicitly cast to numeric.
library("RODBC")
odbcChannel <- odbcConnect("TableName")
ID <- sqlQuery(odbcChannel, "SELECT DISTINCT [ID] FROM TEST4")
I have verified this via
str(ID)
The next thing I tried was to cast the data to character using
ID <- as.character(ID)
This works without an error message. Unfortunately, parts of the data are altered, which is kind of bad for a unique ID:
Minimum Example:
a = 30000005000008498
b <- as.character(a)
output is:
[1] "30000005000008496"
I think it might have something to do with the precision limit of R's numeric (double) type. For smaller numbers, as.character works just fine. However, I could not figure out how to keep the original ID when importing from SQL.
Question 1: Is there any way to avoid the implicit typecast to numeric?
Question 2: Any ideas how I can import the 17-digit character string from SQL without R changing it?
Use as.is = TRUE.
testid <- sqlQuery(database,"SELECT CAST(id as CHAR) as id from my_table", as.is=TRUE);
Even if the column is numeric in the database, testid will be a data frame containing character values.
I think as.is can be set for each column separately (using as.is = c(..)) or for all columns at the same time.
The CAST(.. as CHAR) is probably not necessary when the column is already of type VARCHAR.
