I have a simple Perl script that connects to MS SQL Server 10.50 via FreeTDS. It runs a single query, something like SELECT name FROM table; the table has about 15,000 records. I do prepare, execute (no binds), then fetch in a while loop. Prepare and execute pass OK; fetch loops over about 300 records, then hangs and eventually comes back with "Read from the server failed". In detail:
DBD::Sybase::st fetchrow_array failed: OpenClient message: LAYER = (0) ORIGIN = (0) SEVERITY = (78) NUMBER = (36)
Server ....., database ...
Message String: Read from the server failed
The freetds.conf has "tds version" set to 4.2. When I try 7.0, 7.1, or 7.2, the script doesn't even get past the "execute" step.
If I change the query to limit the results, e.g. SELECT TOP 200 name FROM table, it finishes just fine.
Has anyone seen anything like it?
I am currently trying to integrate a trigger into my SQL code. However, I am facing an issue where including the trigger causes connection problems and breaks any further queries against the sourced database. I am using MariaDB.
This is what I have:
/* TRIGGERS */
DELIMITER |
CREATE TRIGGER max_trials
BEFORE INSERT ON Customer_Trials
FOR EACH ROW
BEGIN
DECLARE dummy INT DEFAULT 0;
IF NOT (SELECT customer_id
FROM Active_Customers
WHERE NEW.customer_id = customer_id)
THEN
SET dummy = 1;
END IF;
END |
DELIMITER ;
I source a file which contains all of this code.
When the trigger is uncommented and I source the file (the table does not exist yet), I get this output:
MariaDB [(none)]> SOURCE db.sql;
Query OK, 0 rows affected, **1 warning** (0.000 sec)
Query OK, 1 row affected (0.000 sec)
Database changed
Query OK, 0 rows affected (0.028 sec)
Query OK, 0 rows affected (0.019 sec)
...
...
**ERROR 2013 (HY000) at line 182 in file: 'db.sql': Lost connection to MySQL server during query**
Notice that a warning is produced at the top
and an error is produced at the bottom. Now let's look at the
warning:
MariaDB [carpets]> SHOW WARNINGS;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
ERROR: Can't connect to the server
In the snippets above you see a warning and an error, but both of them refer to a loss of connection, for a reason I do not understand.
Let's look at the other case.
When I drop the database and reload it with the trigger commented out, I receive the following result:
MariaDB [(none)]> SOURCE carpet.sql;
Query OK, 9 rows affected (0.042 sec)
Query OK, 1 row affected (0.000 sec)
Database changed
Query OK, 0 rows affected (0.018 sec)
...
...
Query OK, 0 rows affected (0.003 sec)
I do not run into any issues. From this, it appears that the trigger is causing a problem that breaks the expected functionality. I cannot insert or do much of anything after the error has occurred, because every subsequent query results in a connection error.
Having just got my hands on triggers, would anyone happen to have
an idea of what is going on here?
I'm currently synchronizing data between MariaDB and MSSQL; it is a two-way sync.
I used SQL Server on Windows and everything worked well until a few days ago, when I switched all the test DBs to a Linux server, so MSSQL now runs in a Docker container (the official image).
My environment:
MSSQL Docker image
Ubuntu (macOS as well); the CPU and RAM requirements are met both for the host and for Docker.
My problem:
The SQL Agent Job ran perfectly for ~10 minutes. After that, no changes were captured into cdc.dbo_MyTrackedTable_CT.
I want this CDC job to run forever.
I got this message:
Executed as user: 1b23b4b8a3ec\1b23b4b8a3ec$.
Maximum stored procedure, function, trigger, or view nesting level exceeded (limit 32). [SQLSTATE 42000] (Error 217)
My inspection:
EXEC msdb.dbo.sp_help_job
@job_name = N'cdc.MyDBName_capture',
@job_aspect = N'ALL';
Return: last_outcome_message
The job failed. The Job was invoked by User sa. The last step to run was step 2 (Change Data Capture Collection Agent).
Next, a closer inspection:
SELECT
job.*, '|' as "1"
, activity.*, '|' as "2", history.*
, CASE
WHEN history.[run_status] = 0 THEN 'Failed'
WHEN history.[run_status] = 1 THEN 'Succeeded'
WHEN history.[run_status] = 2 THEN 'Retry (step only)'
WHEN history.[run_status] = 3 THEN 'Canceled'
WHEN history.[run_status] = 4 THEN 'In-progress message'
WHEN history.[run_status] = 5 THEN 'Unknown'
ELSE 'N/A' END as Run_Status
FROM msdb.dbo.sysjobs_view job
INNER JOIN msdb.dbo.sysjobactivity activity ON job.job_id = activity.job_id
INNER JOIN msdb.dbo.sysjobhistory history ON job.job_id = history.job_id
WHERE 1=1
AND job.name = 'cdc.MyDBName_capture'
AND history.run_date = '20180122'
Return:
See this SQL result image (sorry, I don't have enough reputation to embed images, so a link instead).
As you can see, the CDC job starts, runs, and retries up to 10 times; after 10 retries, I cannot capture changes anymore.
I need to start the job again by:
EXEC msdb.dbo.sp_start_job N'cdc.MyDbName_capture';
Then about every minute the job retries, up to 10 times, and then the job stops. ¯\_(ツ)_/¯
So can you tell me why this happens and how to fix it?
FYI, this is my job configuration:
-- https://learn.microsoft.com/en-us/sql/relational-databases/system-stored-procedures/sys-sp-cdc-add-job-transact-sql
EXECUTE sys.sp_cdc_change_job
@job_type = N'capture',
@maxscans = 1,
@maxtrans = 500,
@continuous = 1,
@pollinginterval = 1
;
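For reference, the capture job's current settings can be read back from the CDC metadata in msdb; this is just a quick check using the standard cdc_jobs table:
-- Read back the current CDC job configuration (capture and cleanup jobs)
SELECT job_type, maxscans, maxtrans, continuous, pollinginterval
FROM msdb.dbo.cdc_jobs;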
It's also not a trigger issue, right? I felt it was risky to try turning triggers off, but it made no difference.
-- Turn off recursive triggers
ALTER DATABASE MyDBName
SET RECURSIVE_TRIGGERS OFF;
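To actually rule triggers out, a quick check (just a sketch against the standard catalog view, run in MyDBName) is to list any triggers defined in the database; the CDC capture job reads the transaction log and does not use user triggers:
-- List triggers in the tracked database and whether they are disabled
SELECT name, parent_class_desc, is_disabled, is_instead_of_trigger
FROM sys.triggers;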
I am experiencing the same thing on SQL Server 2017 running under Windows Server. Interestingly, the same system had been running with no problems for years under SQL Server 2012, so I am thinking it may be a bug introduced somewhere along the way.
I found that with some retries the problem cleared, so as a workaround I edited the job and increased the number of retries, and I haven't seen the error again yet.
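If it helps, the retry count can be raised directly on the capture step; a sketch (assuming the job and step shown above, step 2 being the Change Data Capture Collection Agent) would be:
-- Increase the retry attempts on step 2 of the CDC capture job
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'cdc.MyDBName_capture',
    @step_id = 2,
    @retry_attempts = 10,
    @retry_interval = 1; -- minutes between retries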
See KB4073684 - FIX: Change data capture does not work in SQL Server 2017.
This happens because of a confirmed bug:
Microsoft has confirmed that this is a problem in the Microsoft products...
The fix is included in Cumulative Update 4 for SQL Server 2017. However, they recommend simply installing the latest cumulative update:
Each new build for SQL Server 2017 contains all the hotfixes and security fixes that were in the previous build. We recommend that you install the latest build for SQL Server 2017.
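To check whether your instance is already on a build that contains the fix, the standard SERVERPROPERTY values are enough; ProductUpdateLevel reports the cumulative update level on SQL Server 2017:
-- Check the installed build and cumulative update level
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,
       SERVERPROPERTY('ProductUpdateLevel') AS ProductUpdateLevel; -- e.g. CU4 or later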
I'm currently working on a project to migrate a code base from Advantage Database Server to SQL Server.
I'm using FireDAC from Delphi XE8 linked to Microsoft SQL Server 2014 Express.
I have a small test project. There's a TDBGrid showing the contents of a table (the query's lock mode is pessimistic, lock point immediate).
I have another TQuery with a SQL command like this:
update myTable
set firstName = 'John'
where id = 1
What I do:
I put the first row in Edit mode (by writing something in a cell)
When I press a button, it runs executeSQL on the update query
Nothing happens -- the update query does not go through
That's fine... but I was expecting an error message telling me the UPDATE didn't go through.
How can I get the same behavior, but with an error message raised?
Essential connection settings to work with row locks:
TFDConnection.UpdateOptions.LockMode := lmPessimistic;
TFDConnection.UpdateOptions.LockPoint := lpImmediate;
TFDConnection.UpdateOptions.LockWait := False;
The behaviour described is SQL Server waiting for the lock to be released before it can finish committing the UPDATE. By setting your FireDAC connection to 'no wait', it will raise an exception as soon as you attempt to do something with a row you've locked by putting the dataset in Edit. You can then catch that exception and do whatever you want.
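If you want to see this blocking from the server side while the dataset is in Edit, a quick sketch against the standard DMVs (run from a separate connection) shows the UPDATE waiting on the row lock:
-- Sessions currently blocked, and what they are waiting on
SELECT session_id, blocking_session_id, wait_type, wait_resource, command
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;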
I'm trying to copy a table in SQL Server, but a simple statement seems to be locking my database when using pyodbc. Here's the code I'm trying:
import pyodbc as db  # "db" is the pyodbc module

dbCxn = db.connect(cxnString)
dbCursor = dbCxn.cursor()
query = """\
SELECT TOP(10) *
INTO production_data_adjusted
FROM production_data
"""
dbCursor.execute(query)
The last statement returns immediately, but both LINQPad and SQL Server Management Studio are locked out of the database afterwards (I try to refresh their table lists). Running sp_who2 shows that LINQPad/SSMS are stuck waiting for my pyodbc process. Other databases on the server seem fine, but all access to this database gets held up. The only way I can get these other applications to resolve their stalls is by closing the pyodbc database connection:
dbCxn.close()
This exact same SELECT ... INTO statement works fine and takes only a second from LINQPad and SSMS. The above code works fine and doesn't lock the database if I remove the INTO line. It even returns results if I add fetchone() or fetchall().
Can anyone tell me what I'm doing wrong here?
Call the commit function of either the cursor or connection after SELECT ... INTO is executed, for example:
...
dbCursor.execute(query)
dbCursor.commit()
Alternatively, automatic commit of transactions can be specified when the connection is created using autocommit. Note that autocommit is an argument to the connect function, not a connection string attribute, for example:
...
dbCxn = db.connect(cxnString, autocommit=True)
...
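If you want to confirm that it is the open, uncommitted SELECT ... INTO transaction holding the locks, a quick sketch against a standard DMV (run in the affected database from another connection) shows who holds or waits on what:
-- Locks held or requested in the current database
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();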
I would like to retrieve the statistics information returned by SQL Server when using SET STATISTICS IO ON through a JDBC interface.
Getting the execution plan is pretty simple: after running SET SHOWPLAN_XML ON, the result of a call to Statement.execute() will be the execution plan, and when using SET STATISTICS XML ON a second ResultSet is returned by the Statement instance.
However, when running SET STATISTICS IO ON before using Statement.execute(), only the query results are returned: no further ResultSets, no Warnings, nothing.
Does anyone have a clue how I can get that information? Where it might be hidden?
I tried using jTDS as well as Microsoft's JDBC driver (3.0 and 4.0) against SQL Server 2005, SQL Server 2008R2 and SQL Server 2012.
I checked all ResultSets returned by the query (checked by using Statement.getMoreResults()), as well as the Warning objects returned by the Connection.getWarnings() and Statement.getWarnings().
Unlike SET SHOWPLAN_XML ON, which changes the result set of any subsequent statement to the plan instead of the query results, SET STATISTICS IO ON does not touch the result set.
SET STATISTICS IO ON lets the query execute and returns the stats as a message.
See http://msdn.microsoft.com/en-us/library/ms131350.aspx
In ADO.NET, there is an event on the SqlConnection object called InfoMessage to which you can attach a handler and receive any message the server sends, such as PRINT output or IO stats.
I had a quick look at the MS JDBC driver for SQL Server and didn't find anything comparable; the best I found was this: Is there a way to display PRINT results with SQL server JDBC driver?.
My Java knowledge is thin and there could be something similar in another driver, although the above link on SQL Server Message Results only mentions the "SQL Server Native Client OLE DB provider".
After looking a bit more, I found out that you can get the message by running a trace (with the Profiler): the message appears as a trace message under the "User Error Message" EventClass, which contains the TransactionID you can use to relate it to your batch.
There is a default trace running in the background of SQL Server.
See http://www.simple-talk.com/sql/performance/the-default-trace-in-sql-server---the-power-of-performance-and-security-auditing/ and http://www.sqlservercentral.com/articles/sql+server+2005/64547/. You might add the event you need to a trace, or run another one and then read from it.
Though to be honest, I hope you find a better solution.
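Reading a trace back with T-SQL looks roughly like this; just a sketch, the file path is hypothetical, and you would point it at whichever trace captured the "User Error Message" event class:
-- Read a trace file back as a table (path is hypothetical)
SELECT StartTime, SPID, TextData
FROM fn_trace_gettable(N'C:\Traces\stats_io_trace.trc', DEFAULT)
WHERE EventClass = 162; -- "User Error Message"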
EDIT:
I have no clue why you would be able to get PRINT messages but not IO stats through JDBC, but I can propose something else, in the direction I started with the trace.
You can do it with Extended Events: hook the right event and read the captured data afterwards with something like this.
First, execute this on the DB:
CREATE EVENT SESSION QueryStats ON SERVER
ADD EVENT sqlserver.error_reported
(
ACTION(sqlserver.sql_text)
WHERE (severity = 10
AND state = 0
AND user_defined = 0
AND error = 3615)
)
ADD TARGET package0.ring_buffer
WITH(max_dispatch_latency = 1 seconds)
GO
Then, around your statement:
ALTER EVENT SESSION QueryStats ON SERVER STATE = START
SET STATISTICS IO ON
select * from MyTable -- your statement(s)
SET STATISTICS IO OFF
WAITFOR DELAY '00:00:01'; -- you have to wait, because max_dispatch_latency cannot be set below 1 second
WITH QueryStats
AS (
SELECT CAST(target_data AS xml) AS SessionData
FROM sys.dm_xe_session_targets st
INNER JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
WHERE name = 'QueryStats'
)
SELECT
error.value('(#timestamp)[1]', 'datetime') as event_timestamp
,error.value('(action/value)[1]', 'nvarchar(max)') as sql_text
,error.value('(data/value)[5]', 'varchar(max)') as [message]
FROM QueryStats d
CROSS APPLY d.SessionData.nodes ('//RingBufferTarget/event') AS t(error)
ALTER EVENT SESSION QueryStats ON SERVER STATE = STOP
GO
And there you get a second result set with your IO stats.
Still, this solution is far from final: the wait time would need to be removed and the trace scoped better, which might be possible.
You might also let the trace run and collect all the statements / IO stats later, depending on what you intend to do with them.