Select query produces SqlDateTime overflow on valid dates - sql-server

I have a problem: on a simple select like SELECT * FROM table, a SqlDateTime overflow error is returned seemingly at random (the query works a few times, then the error returns; it works a few more times, then the error comes back). While using the same connection, the error occurs on the same row; if I close and reopen Management Studio, it occurs on a different row.
Exact error message is:
An error occurred while executing batch. Error message is: SqlDateTime
overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59
PM.
Table has 3 DateTime columns:
DTcolumn1 - can be null, without default value
DTcolumn2 - must not be null, default value ('1800-01-01')
DTcolumn3 - can be null, without default value
Values in all 3 DateTime columns look fine (null or inside the allowed interval); a check along the lines of the query below confirms it.
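A minimal sketch of that check, using the column names above ([table] stands in for the real table name):
SELECT *
FROM [table]
WHERE DTcolumn1 NOT BETWEEN '17530101' AND '99991231 23:59:59.997'
   OR DTcolumn2 NOT BETWEEN '17530101' AND '99991231 23:59:59.997'
   OR DTcolumn3 NOT BETWEEN '17530101' AND '99991231 23:59:59.997'
-- NULL values never satisfy NOT BETWEEN, so the nullable columns pass automatically;
-- zero rows returned means every stored value is inside the SqlDateTime range.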
Table also has some other columns of varchar and other types. Empirically, the select fails more often if I add ORDER BY on one of those 3 DateTime columns.
Collation of the database is Slovenian_CI_AI.
What is causing this error (as I said - DateTime values seem to be OK)?
Thank you in advance!
EDIT 1 (2016-05-09): I forgot to mention it previously: the error happens both in SQL MGMT Studio and from code (using LINQ to SQL).
EDIT 2 (2016-05-10): It seems the problem is broader: on every table with more than, say, 10000 records, some spurious error is thrown. On some other table it throws:
An error occurred while executing batch. Error message is: Internal
connection fatal error.
It also disconnects me from the database (the status bar at the bottom says disconnected). The SQL server is installed on a remote server inside the local network.

Our admin found out that the problem is with the DNE LightWeight Filter. With this monster disabled, everything works as it is supposed to (no random disconnects with strange errors).
You can disable it under Control Panel / Network and Sharing Center / Change adapter settings. Right-click your network device, select Properties, and deselect DNE LightWeight Filter.
Link to Server Fault, where I posted the question once we started to believe this is a network-related problem.

Related

rsProcessingError - Reporting Services Error - rsErrorReadingNextDataRow

I have run into a strange issue in one of my SSRS reports. I get the following error:
"An error has occurred during report processing. (rsProcessingAborted)
Cannot read the next data row for the dataset Defect_Summary. (rsErrorReadingNextDataRow)"
Whenever I run this dataset's query in Query Designer or in Management Studio, the query runs fine. However, when I run the report in Report Builder or on the server, I get the above error. After researching, I have found that the issue has something to do with my parameter.
I have a parameter @PO (Production Order) where the user provides an 8-digit number, e.g. 11002575. In my query I have the following line: OrderNr / 10000 = @PO. In the database, OrderNr is of type bigint and has values such as 110025750020. I divide this number by 10000 so that the result equals the 8-digit @PO parameter, e.g. 110025750020 / 10000 = 11002575. I used to use LEFT(OrderNr, 8), but found it slowed down the query, so this division has worked better for me.
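To illustrate the arithmetic with the question's sample values (the variables are just stand-ins for the real column and parameter):
DECLARE @PO bigint = 11002575;          -- the 8-digit parameter value
DECLARE @OrderNr bigint = 110025750020; -- a sample OrderNr from the table
-- bigint division truncates, so the last four digits fall away:
SELECT @OrderNr / 10000 AS Truncated,   -- 11002575
       CASE WHEN @OrderNr / 10000 = @PO THEN 'match' ELSE 'no match' END AS Result;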
Anyway, here's the strange part: when I first encountered this error, I changed my parameter type from text to integer. This fixed the problem temporarily and the report ran fine. Then I encountered the error again, so I changed the type back to text; again, this fixed the problem temporarily. I keep going back and forth with this temporary fix and have not found a permanent resolution: the error keeps coming back after a while of working, and all I know to do is toggle between integer and text. Can anyone help me resolve this error permanently?

Issues importing .tsv to SQL Server

I am trying to simply import a .tsv file (200 columns, 400,000 rows) into SQL Server.
I get this error all the time (always with a different column):
Error 0xc02020a1: Data Flow Task 1: Data conversion failed. The data conversion for column "Column 93" returned status value 4 and status text "Text was truncated or one or more characters had no match in the target code page.".
Even though I explicitly set the column widths, I found myself going back and changing the OutputColumnWidth (500 for this case) after every failure.
Is there a way to change all OutputColumnWidth values to something like 'max' at once? With 200 columns, I can't keep waiting for the import to fail and going back to change the width for each failed column... (I do not care about performance; any data type is fine for me.)
You could try opening the code view of your SSIS package and doing a Ctrl-H replace of all "50" with "500". If you have 50s you don't want to change to 500, look at the code and make the replacement more context-specific.

SQL Server 2016 R Services: sp_execute_external_script returns 0x80004005 error

I run some R code after querying 100M records and get the following error after the process runs for over 6 hours:
Msg 39004, Level 16, State 19, Line 300
A 'R' script error occurred during execution of 'sp_execute_external_script'
with HRESULT 0x80004005.
HRESULT 0x80004005 appears to be associated in Windows with Connectivity, Permissions or an "Unspecified" error.
I know from logging in my R code that the process never reaches the R script at all. I also know that the entire procedure completes after 4 minutes on a smaller number of records, for example, 1M. This leads me to believe that this is a scaling problem or some issue with the data, rather than a bug in my R code. I have not included the R code or the full query for proprietary reasons.
However, I would expect a disk or memory error to display a 0x80004004 Out of memory error if that were the case.
One clue I noticed in the SQL ERRORLOG is the following:
SQL Server received abort message and abort execution for major error : 18
and minor error : 42
However the time of this log line does not coincide with the interruption of the process, although it does occur after it started. Unfortunately, there is precious little on the web about "major error 18".
A SQL Trace when running from SSMS shows the client logging in and logging out every 6 minutes or so, but I can only assume this is normal keepalive behaviour.
The sanitized sp_execute_external_script call:
EXEC sp_execute_external_script
@language = N'R'
, @script = N'#We never get here
#returns name of output data file'
, @input_data_1 = N'SELECT TOP 100000000 * FROM DATA'
, @input_data_1_name = N'x'
, @output_data_1_name = N'output_file_df'
WITH RESULT SETS ((output_file varchar(100) not null))
Server Specs:
8 cores
256 GB RAM
SQL Server 2016 CTP 3
Any ideas, suggestions or debugging hints would be greatly appreciated!
UPDATE:
I set TRACE_LEVEL=3 in rlauncher.config to turn on a higher level of logging and re-ran the process. The log reveals a cleanup process that ran, removing session files, at the time the entire process failed after 6.5 hours.
[2016-05-30 01:35:34.419][00002070][00001EC4][Info] SQLSatellite_LaunchSatellite(1, A187BC64-C349-410B-861E-BFDC714C8017, 1, 49232, nullptr) completed: 00000000
[2016-05-30 01:35:34.420][00002070][00001EC4][Info] < SQLSatellite_LaunchSatellite, dllmain.cpp, 223
[2016-05-30 08:04:02.443][00002070][00001EC4][Info] > SQLSatellite_LauncherCleanUp, dllmain.cpp, 309
[2016-05-30 08:04:07.443][00002070][00001EC4][Warning] Session A187BC64-C349-410B-861E-BFDC714C8017 cleanup wait failed with 258 and error 0
[2016-05-30 08:04:07.444][00002070][00001EC4][Info] Session(A187BC64-C349-410B-861E-BFDC714C8017) logged 2 output files
[2016-05-30 08:04:07.444][00002070][00001EC4][Warning] TryDeleteSingleFile(C:\PROGRA~1\MICROS~1\MSSQL1~1.MSS\MSSQL\EXTENS~1\MSSQLSERVER06\A187BC64-C349-410B-861E-BFDC714C8017\Rscript1878455a2528) failed with 32
[2016-05-30 08:04:07.445][00002070][00001EC4][Warning] TryDeleteSingleDirectory(C:\PROGRA~1\MICROS~1\MSSQL1~1.MSS\MSSQL\EXTENS~1\MSSQLSERVER06\A187BC64-C349-410B-861E-BFDC714C8017) failed with 32
[2016-05-30 08:04:08.446][00002070][00001EC4][Info] Session A187BC64-C349-410B-861E-BFDC714C8017 removed from MSSQLSERVER06 user
[2016-05-30 08:04:08.447][00002070][00001EC4][Info] SQLSatellite_LauncherCleanUp(A187BC64-C349-410B-861E-BFDC714C8017) completed: 00000000
It appears the only way to allow my long-running process to continue is to:
a) Extend the Job Cleanup wait time to allow the job to finish
b) Disable the Job Cleanup process
I have thus far been unable to find the value that sets the Job Cleanup wait time in the MSSQLLaunchpad service.
While a JOB_CLEANUP_ON_EXIT flag exists in rlauncher.config, setting it to 0 has no effect. The service seems to reset it to 1 when it is restarted.
Again, any suggestions or assistance would be much appreciated!
By default, SQL Server reads all data into R memory as a data frame before starting execution of the R script. Based on the fact that the script works with 1M rows and fails to start with 100M rows, this could potentially be an out-of-memory error. To resolve memory issues (other than increasing memory on the machine or reducing the data size), you can try one of these solutions (a sketch of both follows):
Increase the memory allocation for R process execution via the external resource pool's max_memory_percent setting (visible in sys.resource_governor_external_resource_pools). By default, SQL Server limits R process execution to 20% of memory.
Use streaming execution for the R script instead of loading all data into memory. Note that streaming can only be used in cases where the output of the R script doesn't depend on reading or looking at the entire set of rows.
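A minimal sketch of both options (the 50% cap and the 50,000-row batch size are arbitrary example values; the script and query bodies are placeholders):
-- Option 1: raise the external resource pool's memory cap, then reconfigure.
ALTER EXTERNAL RESOURCE POOL "default" WITH (max_memory_percent = 50);
ALTER RESOURCE GOVERNOR RECONFIGURE;
-- Option 2: stream the input into R in batches via the @r_rowsPerRead parameter.
EXEC sp_execute_external_script
    @language = N'R'
  , @script = N'# process x batch by batch'
  , @input_data_1 = N'SELECT * FROM DATA'
  , @input_data_1_name = N'x'
  , @params = N'@r_rowsPerRead int'
  , @r_rowsPerRead = 50000;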
The warnings in RLauncher.log about data cleanup happening after the R script execution can be safely ignored and are probably not the root cause of the failures you are seeing.
Unable to resolve this issue in SQL, I simply avoided the SQL Server Launchpad service, which was interrupting the processing, and pulled the data from SQL using the R RODBC library. The pull took just over 3 hours (instead of 6+ using sp_execute_external_script).
This might implicate the SQL Launchpad service, and suggests that memory was not the issue.
Please try your scenario on SQL Server 2016 RTM. There have been many functional and performance fixes made since CTP3.
For more information on how to get SQL Server 2016 RTM, check out the "SQL Server 2016 is generally available today" blog post.
I had almost the same issue with SQL Server 2016 RTM-CU1. My query failed with error 0x80004004 instead of 0x80004005, and it failed beginning with 10,000,000 records; but that could be related to my having only 16 GB of memory and/or different data.
I got around it by using a field list instead of "*". Even if the field list contains all the fields from the data source (a rather complicated view in my case), a query featuring a field list always succeeds, while "SELECT TOP x * FROM ..." always fails for some large x.
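To illustrate the workaround (the view and column names here are hypothetical):
-- reliably fails for some large x in my environment:
SELECT TOP 10000000 * FROM dbo.MyComplicatedView;
-- succeeds, even though the list names every column in the view:
SELECT TOP 10000000 Col1, Col2, Col3 FROM dbo.MyComplicatedView;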
I've had a similar error (0x80004004), and the problem was that one of the rows in one of the columns contained a "very" special character (I say "very" because other special characters did not cause this error).
When I replaced 'Folkelånet Telefinans' with 'Folkelanet Telefinans', the problem went away.
In your case, maybe at least one of the values in the last 99M rows contains a character like that, and you just have to replace it. I hope Microsoft resolves this issue at some point.
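A sketch of one way to hunt for such values (table and column names are placeholders; the binary collation keeps the range pattern from matching accent-insensitively):
SELECT *
FROM DATA
WHERE SomeTextColumn LIKE '%[^ -~]%' COLLATE Latin1_General_BIN;
-- [^ -~] matches any character outside the printable ASCII range 0x20-0x7E,
-- which is where a character like the 'å' above will turn up.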

SQL Server error - Operand type clash: ntext is incompatible with int - (I'm not even using "ntext")

One of the columns I'm operating on is:
Comments VARCHAR(8000)
So basically I'm trying to insert text of up to 600 characters into this Comments column. When I run my script, everything goes smoothly for the first 10 rows; then all of a sudden I get this error:
pypyodbc.DataError: ('22018', '[22018] [Microsoft][ODBC SQL Server
Driver][SQL Server]Operand type clash: ntext is incompatible with
int')
Then, if I rerun the query, everything again goes smoothly for the next 10 rows and, as you might have guessed by now, I get the same error again.
What can I do to fix this?
Edit:
I have tried using VARCHAR(MAX), NVARCHAR(MAX), VARCHAR(800), TEXT. I get the same error every time.
I wonder whether it is a problem with the data on row 10?
To test this, try deleting the data on, say, row 5, and see whether the error starts on row 9.
I would recommend using pymssql instead of pypyodbc. It seems like a driver-level issue, and switching to pymssql might help. Please follow the ACOM doc and let me know if that helps. If you still run into the same issue, let me know and I can try to help further.

SQL Server 2008 - Find out on which data row the error (divide by zero) appeared

I have a query running on a large amount of data in SSMS. After about 20 minutes of execution, the query completes with a 'Divide by zero' error (some results are already returned).
It would be helpful to know on which data the error appeared, i.e. to find the id/row number on which the error can be reproduced.
The query itself is rather complicated, so I am not going to post it; the question is more technical - is there a log somewhere, or another way to find the problematic row(s)?
SELECT
    *,
    Dividend / NULLIF(Divisor, 0) AS NullIfBollixedNow
FROM
    MyTable
-- the WHERE clause isolates the row(s); without it, you get NULL instead of an error in the output
WHERE
    Dividend / NULLIF(Divisor, 0) IS NULL
In SQL Server there isn't an easy way to do this, as ordering or other functions may affect the row order. Put some exception handling into your procedure using CASE statements, and output an exception value on the broken rows. Something like this:
If your query was:
SELECT MyFirstField / MySecondField AS TestMath
FROM myTblOfFacts
then you could rewrite it as:
SELECT
    CASE WHEN MySecondField <> 0 THEN MyFirstField / MySecondField
         ELSE -1
    END AS TestMath
FROM myTblOfFacts
This would return -1 for the rows with an issue.
