rsProcessingError - Reporting Services Error - rsErrorReadingNextDataRow - sql-server

I have run into a strange issue in one of my SSRS reports. I get the following error:
"An error has occurred during report processing. (rsProcessingAborted)
cannot read the next data row for the dataset Defect_Summary. (rsErrorReadingNextDataRow)"
Whenever I run this dataset's query in the query designer or in Management Studio, the query runs fine. However, when I run the report in Report Builder or on the server, I get the above error. After researching, I have found that the issue has something to do with my parameter.
I have a parameter @PO (Production Order) where the user provides an 8-digit number, e.g. 11002575. In my query I have the following line: OrderNr / 10000 = @PO. In the database, OrderNr is of type bigint and holds a value such as 110025750020. I divide this number by 10000 so that the result matches the 8-digit @PO parameter (110025750020 / 10000 = 11002575). I used to use LEFT(OrderNr, 8), but found it slowed down the query, so this division has worked better for me.
Anyway, here's the strange part: when I first encountered this error, after researching I changed my parameter type from Text to Integer. That fixed the problem temporarily and the report ran fine. Then the error came back, so I changed the type back to Text, and again the problem went away temporarily. I keep going back and forth with this temporary fix and have not found a permanent resolution; the error just keeps coming back after the report has worked for a while, and all I know to do is keep switching between Integer and Text. Can anyone help me resolve this error permanently?
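For reference, a minimal sketch of what a type-stable version of that predicate could look like, using the OrderNr column and @PO parameter from the question (the explicit CAST is my own suggestion, not something from the original report, and the surrounding SELECT and table name are placeholders):

-- Force @PO to bigint so the report parameter's type (Text vs. Integer)
-- no longer affects the comparison; bigint integer division truncates,
-- so 110025750020 / 10000 = 11002575.
SELECT *                          -- placeholder column list
FROM dbo.Defect_Summary           -- placeholder table name
WHERE OrderNr / 10000 = CAST(@PO AS bigint);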

Related

Processing Interactive Grid manually through PL/SQL keeps throwing an error

I used this site https://community.oracle.com/thread/3937159?start=0&tstart=0 to learn how to manually process interactive grids. I got it to work on a small table with 3 columns, but when I tried to get it to work for a bigger table, it kept throwing this error:
PL/SQL: numeric or value error: character string buffer too small.
I tried updating only 1 column and converting the data type to the correct one, and the error is not going away.
This message usually means you're trying to store something like 'AAAA' into a column (or PL/SQL variable) that only accepts 1, 2, or 3 characters, like varchar2(3).
Make sure your columns and variables are sized properly for the data you're processing.
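To illustrate, here is a minimal PL/SQL sketch (the variable name is invented) that reproduces the error by assigning a value longer than the declared buffer:

DECLARE
  v_code VARCHAR2(3);   -- buffer only holds 3 characters
BEGIN
  v_code := 'AAAA';     -- 4 characters: raises ORA-06502
                        -- "character string buffer too small"
END;
/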

Select query produces SqlDateTime overflow on valid dates

I have a problem: on a simple select like SELECT * FROM table, a SqlDateTime overflow error is returned at random (it works a few times, then the error is returned, then it works a few more times, and so on). The error occurs on the same row while I'm using the same connection; if I close and reopen Management Studio, the error occurs on a different row.
Exact error message is:
An error occurred while executing batch. Error message is: SqlDateTime
overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59
PM.
Table has 3 DateTime columns:
DTcolumn1 - can be null, without default value
DTcolumn2 - must not be null, default value ('1800-01-01')
DTcolumn3 - can be null, without default value
Values in all 3 DateTime columns look fine (null or inside of the allowed interval).
Table also has some other columns of varchar and other types. The select is more likely to fail if I add an ORDER BY on one of those 3 DateTime columns (empirically tested).
Collation of the database is Slovenian_CI_AI.
What is causing this error (as I said - DateTime values seem to be OK)?
Thank you in advance!
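For what it's worth, a quick sanity check along these lines (column names taken from the description above; the table name is a placeholder) would confirm whether any stored value really falls outside the range SqlDateTime accepts:

-- SqlDateTime (the client-side type) only accepts 1753-01-01 through
-- 9999-12-31. A datetime column cannot store earlier dates, but a
-- datetime2 column can, so this shows whether any row is out of range.
SELECT *
FROM dbo.MyTable            -- placeholder table name
WHERE DTcolumn1 < '17530101'
   OR DTcolumn2 < '17530101'
   OR DTcolumn3 < '17530101';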
EDIT 1 (2016-05-09): I forgot to mention it previously: error happens in SQL MGMT Studio and from code (using LINQ to SQL).
EDIT 2 (2016-05-10): It seems there is a different problem: on every table with more than, let's say, 10000 records it throws some silly error. On another table it throws:
An error occurred while executing batch. Error message is: Internal
connection fatal error.
It also disconnects me from the database (in the bottom status bar it says disconnected). SQL Server is installed on a remote server inside the local network.
Our admin found out that the problem is with the DNE LightWeight Filter. If this monster is disabled, everything works as it is supposed to (no random disconnects with strange errors).
You can disable it by going to Control Panel / Network and Sharing Center / Change adapter settings. Right-click your network device and select Properties, then deselect DNE LightWeight Filter.
Link to Server Fault, where I posted the question once we started to believe this is a network-related problem.

PhpStorm 10 strange database query execution warning

I started using PhpStorm 10 and ran into strange database query behaviour (Sybase IQ); namely, I'm getting this error:
[010P6] 010P6: A row was received and ignored. java.io.IOException: JZ0EM: End of data.
But the query executes and I get the relevant result.
010P6 - A row was received and ignored.
Description: An unexpected object of type 0xD1 was encountered in the result set being processed and was ignored. Action: Check the query that generated the result set and correct as required.
Can somebody explain what I should correct? The only guess I have is that it has something to do with long char data in the output.

SQL Server error - Operand type clash: ntext is incompatible with int - (I'm not even using "ntext")

One of the columns I'm operating on is:
Comments VARCHAR(8000)
So basically I'm trying to insert large text values of up to 600 characters into this Comments column. When I run my script, everything goes smoothly for the first 10 rows, then all of a sudden I get this error:
pypyodbc.DataError: ('22018', '[22018] [Microsoft][ODBC SQL Server
Driver][SQL Server]Operand type clash: ntext is incompatible with
int')
Then, if I rerun the query, everything goes smoothly for the next 10 rows and, as you might have guessed by now, I get the same error again.
What can I do to fix this?
Edit:
I have tried using VARCHAR(MAX), NVARCHAR(MAX), VARCHAR(800), TEXT. I get the same error every time.
I wonder whether it is a problem with the data on row 10?
To test this, try deleting the data on, say, row 5, and see whether the error starts on row 9.
I would recommend using pymssql instead of pypyodbc. It seems like a driver-level issue, and switching to pymssql might help. Please follow the ACOM doc and let me know if that helps. If you still run into the same issue, let me know and I can try to help further.
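If switching drivers is not an option, one more thing worth trying (my suggestion, not something from this thread) is to cast the parameter explicitly inside the INSERT statement, so whatever type the driver happens to infer for the placeholder is converted on the server side; a sketch, with the table name invented and only the Comments column from the question:

-- Parameterized INSERT with an explicit server-side cast; ? is the ODBC
-- parameter marker that pypyodbc binds the Python value to.
INSERT INTO dbo.MyTable (Comments)   -- placeholder table name
VALUES (CAST(? AS VARCHAR(8000)));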

Why is this data type conversion failing but not failing?

Ok, so I'm wracking my brain on this one...
These two queries... though they appear the same... are apparently different in some fashion. When run against a database in SQL Server Management Studio, the top one results in an error (Conversion failed when converting from a character string to uniqueidentifier.), whereas the bottom one runs just fine. Any ideas as to why that would be?
SELECT CONVERT(UNIQUEIDENTIFIER,'459B621C-A49A-49Cl-900F-AB14D61841E2');
SELECT CONVERT(UNIQUEIDENTIFIER,'459B621C-A49A-49C1-900F-AB14D61841E2');
Could it be a character encoding issue?
Thanks
There is a difference: the first one uses a lowercase letter l ('49Cl'), the second a digit 1 ('49C1').
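As a side note, on SQL Server 2012 and later, TRY_CONVERT makes this kind of bad input easy to spot, because it returns NULL instead of raising the conversion error:

-- Returns NULL: the string contains a lowercase 'l' where a '1' should be
SELECT TRY_CONVERT(UNIQUEIDENTIFIER, '459B621C-A49A-49Cl-900F-AB14D61841E2');
-- Converts successfully: this one is a valid GUID
SELECT TRY_CONVERT(UNIQUEIDENTIFIER, '459B621C-A49A-49C1-900F-AB14D61841E2');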
