Trigger generating ERROR 2006 (HY000), connection timeouts - database

I am currently trying to integrate a trigger into my SQL code.
However, integrating the trigger causes connection issues and
breaks every subsequent query against the sourced database.
I am using MariaDB.
This is what I have:
/* TRIGGERS */
DELIMITER |
CREATE TRIGGER max_trials
BEFORE INSERT ON Customer_Trials
FOR EACH ROW
BEGIN
    DECLARE dummy INT DEFAULT 0;
    IF NOT (SELECT customer_id
            FROM Active_Customers
            WHERE NEW.customer_id = customer_id)
    THEN
        SET #dummy = 1;
    END IF;
END |
DELIMITER ;
I source a file which contains all of this code.
When the trigger is uncommented and I source the file (the table
will not exist yet), I get this output:
MariaDB [(none)]> SOURCE db.sql;
Query OK, 0 rows affected, **1 warning** (0.000 sec)
Query OK, 1 row affected (0.000 sec)
Database changed
Query OK, 0 rows affected (0.028 sec)
Query OK, 0 rows affected (0.019 sec)
...
...
**ERROR 2013 (HY000) at line 182 in file: 'db.sql': Lost connection to MySQL server during query**
Notice that a warning is produced at the top
and an error is produced at the bottom. Now let's look at the
warning:
MariaDB [carpets]> SHOW WARNINGS;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (111)
ERROR: Can't connect to the server
In the snippets above you see a warning and an error, both of them
referring to a loss of connection, for a reason I do not understand.
Let's look at the other case.
When I drop the database and reload with the trigger commented out,
I receive the following result:
MariaDB [(none)]> SOURCE carpet.sql;
Query OK, 9 rows affected (0.042 sec)
Query OK, 1 row affected (0.000 sec)
Database changed
Query OK, 0 rows affected (0.018 sec)
...
...
Query OK, 0 rows affected (0.003 sec)
I do not run into any issues. From this it appears that the trigger
is what breaks the defined functionality: once the error has been
generated I cannot insert or do much of anything, because every
subsequent query results in a connection error.
Having only just started working with triggers, would anyone happen
to have an idea of what is going on here?
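For comparison, the conventional MariaDB way for a BEFORE INSERT trigger to reject a row is to raise an error with SIGNAL instead of setting a dummy variable. Below is a minimal sketch reusing the table and column names from the question; it assumes MariaDB 10.0 or later (where SIGNAL is available) and illustrates the pattern only, it is not a confirmed fix for the crash above:
DELIMITER |
CREATE TRIGGER max_trials
BEFORE INSERT ON Customer_Trials
FOR EACH ROW
BEGIN
    -- Reject the insert when the new row's customer is not an active customer
    IF NOT EXISTS (SELECT 1
                   FROM Active_Customers
                   WHERE customer_id = NEW.customer_id)
    THEN
        SIGNAL SQLSTATE '45000'
            SET MESSAGE_TEXT = 'customer_id is not an active customer';
    END IF;
END |
DELIMITER ;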

Related

Informatica only insert last data row into database table

I am a newbie with the Informatica tool.
I ran a workflow to insert data from database A, table A.A, into database B, table B.B. The session did succeed.
But I found a problem in the log file:
Database errors occurred:
Execute -- Informatica' ODBC 20101 driver27376073
Execute -- ODBC Driver Manager Function sequence error
This happens at the last step, when inserting data into B.B.
Only 1 row is inserted per workflow run. Example: I have 7 rows; only 1 row is inserted and 6 rows are rejected.
I searched for the 27376073 error code but found nothing about it.
Can anyone help me solve this problem, please?
Are you using an Aggregator transformation? Check whether the group-by port is marked or not.
Also, if the source is a flat file and fixed-width, some settings need to be adjusted in the session task.

Sql server drop table not working

I have a table with almost 45 million rows. I was updating a field of it with the query:
update tableName set columnX = Right(columnX, 10)
I didn't open a transaction or commit, but ran the query directly. During the execution of the query, after an hour, a power failure unfortunately occurred, and now when I try to run a select query it takes too much time and returns nothing. Even drop table doesn't work. I don't know what the problem is.
SQL Server is rolling back your update statement. You can monitor the status of the rollback in a couple of ways:
1. KILL <session_id> WITH STATUSONLY
2. By using the DMV:
select
    der.session_id,
    der.command,
    der.status,
    der.percent_complete
from sys.dm_exec_requests as der
where der.command IN ('killed/rollback', 'rollback')
Don't try to restart SQL Server, as this may prolong the rollback.
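As a usage example of the first option (the session id 57 below is just a placeholder for whichever spid the UPDATE was running under), this prints an estimate of how far the rollback has progressed:
KILL 57 WITH STATUSONLY;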

Delphi Firedac SQL Server: no error raised when update fails because record is locked

I'm currently working on a project to migrate code base from using Advantage Database Server to SQL Server.
I'm using Firedac of XE8 linked to a Microsoft SQL Server 2014 Express.
I have a small test project. There's a TDBGrid showing the content of a table (the query lock mode is Pessimistic, lockpoint immediate).
I have another TQuery with a SQL command like this:
update myTable
set firstName = 'John'
where id = 1
What I do :
I put the first row in Edit mode (by writing something in the cell)
When I press a button, it runs executeSQL on the Update query
Nothing happens -- the update query does not go through
That's fine ... but I was expecting an error message telling me the UPDATE didn't go through...
How can I get the same behavior, but with an error message triggered?
Essential connection settings to work with row locks:
TFDConnection.UpdateOptions.LockMode := lmPessimistic;
TFDConnection.UpdateOptions.LockPoint := lpImmediate;
TFDConnection.UpdateOptions.LockWait := False;
The behaviour described is SQL Server waiting for the lock to be released before finishing the UPDATE. By setting your FireDAC connection to 'no wait', it will raise an exception as soon as you attempt to do something with the row you have locked by putting the dataset in Edit. You can then catch that exception and do what you want.

ADODB affected rows return trigger's affected rows

I have a VBA that runs a command text to update a table. The table has a trigger on UPDATE.
When I do:
Set rs = cmd1.Execute(affectedCount)
the affectedCount returns the number of rows affected by the trigger (I think).
How do I make it return the original update statement's affected row count?
Assuming you're using SQL Server, I had a similar problem a while ago. I'm not sure if it's related but ADODB would get "confused" by the "# records affected" messages that were generated by SQL Server.
We solved this by adding
SET NOCOUNT ON
to the top of the affected triggers / procedures to suppress the message. You can then try running your statement from SQL Server Management Studio to see exactly which "# records affected" messages are being generated.
Don't know if this will help but maybe worth a try.
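For illustration, this is roughly what that looks like inside a trigger; the trigger, table, and column names below are made up for the example, not taken from the question:
CREATE TRIGGER trg_Orders_Update
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    -- Suppress the extra "rows affected" messages produced by the trigger's own statements
    SET NOCOUNT ON;
    INSERT INTO dbo.OrderAudit (order_id, changed_at)
    SELECT order_id, GETDATE()
    FROM inserted;
END;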

SQL Server 2008 Express (shared hosting) Times Out When Changing Data Type on Field

I'm changing some bad design on a table. The field I'm trying to change holds IIS7 session IDs, which are long numbers. I'm trying to change the field from nvarchar(20) to int. There are 349,000 records in the table.
SQL Server Management Studio times out after 35 seconds. However, if I check the query timeout setting for the connection it is set at 600 seconds (and I can't change it either).
Is there another timeout setting that could be causing this problem?
Here's the error message I'm getting:
- Unable to modify table.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
I've been able to change several other tables with no problem. Of course, they had fewer rows.
This database is on a shared hosting package at Arvixe.com. Do you think this could be part of the problem?
Can you try to run a T-SQL script instead of doing this using the visual designer?
ALTER TABLE dbo.YourTable
ALTER COLUMN YourColumn INT
Now, this will only work if all rows are truly valid INT values! Otherwise, it'll bomb out at some point....
To check if all your rows are truly valid INT, you could run this query:
SELECT * FROM dbo.YourTable
WHERE ISNUMERIC(YourColumn) = 0
This will select all rows that are not valid numerics ... if you get rows here, you have a problem...
