I get the following warning
WARNING: there is already a transaction in progress
in my database, and I want to investigate why this happens. However, because the database is accessed by many microservices, I cannot find which service is trying to start a new/parallel transaction.
How can I increase the level of detail in this message? For example a timestamp, the identity of whoever tried to start the transaction (like a client_addr field), or any other information that would reveal the root of the fault.
Thanks in advance
The source: starting a transaction twice. Example:
t=# begin;
BEGIN
Time: 22.594 ms
t=# begin;
WARNING: there is already a transaction in progress
BEGIN
Time: 1.269 ms
To see who and when, set log_min_messages to at least warning, and set log_line_prefix to include %h for the client IP, %m for the timestamp, and %u for the username (see https://www.postgresql.org/docs/current/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT); with logging_collector on, of course, then check the logs.
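For example, these settings can be applied server-wide with ALTER SYSTEM (a sketch; adjust the prefix format to taste, and note that changing logging_collector itself requires a server restart):

```sql
-- Log warnings and above, prefixed with timestamp, user@database, and client host
ALTER SYSTEM SET log_min_messages = 'warning';
ALTER SYSTEM SET log_line_prefix = '%m %u@%d %h ';
SELECT pg_reload_conf();
```

After this, each occurrence of the warning in the log carries the time, user, database, and client IP that produced it.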
There's plenty you can do to find out what's going on. First, you could check the PostgreSQL logs. If you do not have access to the logs, you can check which queries are active, idle, or idle in transaction by running the following query:
SELECT
    pid,
    query,
    state
FROM pg_stat_activity;
There you can see which transactions are currently running; to show only those, add WHERE state = 'active' to the query.
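Putting it together, a query that also shows who is connected and from where (all columns come from pg_stat_activity):

```sql
SELECT pid, usename, client_addr, state, query
FROM pg_stat_activity
WHERE state = 'active';
```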
IMPORTANT NOTE:
If you're using services to access the database (especially C# services, in my experience), you have to check your connection handling, because if it is not configured correctly you'll end up with services that can accept only one user per transaction, and that's really dangerous.
The problem might be that you are sending all your calls to the database through one connection, and the 'service' never opens new connections. PostgreSQL then emits the message:
WARNING: there is already a transaction in progress
because the connection is already being used by a transaction.
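To check whether everything is funneling through a single shared connection, you can count sessions per client address; a quick diagnostic sketch:

```sql
SELECT client_addr, usename, state, count(*) AS sessions
FROM pg_stat_activity
GROUP BY client_addr, usename, state
ORDER BY sessions DESC;
```

A service that shows only one session no matter how much traffic it handles is a likely suspect.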
I have an on-premise CRM 2016 instance and I can't receive any incoming emails inside of it, even though when I run the test access it says everything is good.
First, I'm unable to change a queue record's email address, because I keep getting a SQL timeout error (no matter how much you increase the timeout, it never completes), but if I try to change any other field it works and saves (just not the email field, of course).
The same happens with the Mailbox records: when I try to change the email it returns a SQL timeout error.
So what I did was change these emails with SQL queries, but even after that the emails still won't be created inside CRM.
It shows the next warning log in the event viewer:
35241 - The recipients for the email message with subject "[x]" in mailbox [email address] did not match any known records.
I'm running out of options here. When I run the diagnostics tool on my organization, its performance is good, but there must be something obstructing communication with SQL Server. Any clues?
SQL timeout error:
Unhandled Exception: System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=8.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]: SQL timeout expired.Detail:
-2147204783
SQL timeout expired.
2018-10-10T14:14:15.5749939Z
I got the answer from the Microsoft Community Forums, thanks to Radu Chiribelea:
It's not enough to change a record's email address in SQL in its base table so that it can be used for email tracking; there are other references as well, for example the EmailSearchBase. This is why you need to let the platform handle your changes.
Your biggest issue here is the SQL timeout, and that is what you need to address. Since this occurs on a Create / Update, I suspect there might be a deadlock somewhere. Do you have any plug-ins or workflows triggered when you create / update? If you disable those, do you still see the issue?
Can you enable a CRM Platform trace at a Verbose Level while reproducing the issue? This would give you a better overview of the actual timeout and you can then start from there to tackle it.
I have used SQL Service Broker and SqlTableDependency, and started SqlTableDependency on a table to get notifications when the table's data changes. I have granted the database all the permissions listed in the SqlTableDependency documentation. After some time, perhaps while idle, it reports the status "Waiting for notification".
When I then change the table (inserting a new record), the status does not change (from "Waiting for notification") and it gives the error: "The conversation handle "A705917C-4762-E711-9447-000C29C3FCF0" is not found."
Can anyone help me fix this issue?
First read this comment please:
There is one very common scenario that results in much more time: debugging. When you develop applications, you often spend several minutes inside the debugger before you move on. So please be careful when you debug an application that the value assigned to the watchDogTimeOut parameter is long enough, otherwise you will incur a destruction of database objects in the middle of your debug activity.
Reference
On the other hand
If you are using SQLDependency and get an error like this:
The conversation handle "206A971D-6F25-DA11-B22F-0003FF6FCCCA" is not found. Invalid object name 'SqlQueryNotificationService-41136655-4314-4536-a477-37156eb628db'.
Then try enabling TRUSTWORTHY:
ALTER DATABASE [DbName] SET TRUSTWORTHY ON;
The TRUSTWORTHY database property is used to indicate whether the
instance of SQL Server trusts the database and the contents within it.
By default, this setting is OFF, but can be set to ON by using the
ALTER DATABASE statement. more information
Thanks to Scott Hanselman for his answer
I had a package that worked perfectly until I decided to put some of its tasks inside a sequence container (more on why I wanted to do that: How to make a SSIS transaction in my case?).
Now I keep getting an error:
[Execute SQL Task] Error: Failed to acquire connection "MyDatabase". Connection may not be configured correctly or you may not have the right permissions on this connection.
Why could this be happening, and how do I fix it?
I started writing my own examples to reply to your question. Then I remember that I met Matt Mason when I talked at a SQL Saturday in New Hampshire. He is the Microsoft Program Manager for SSIS.
While I spent 3 years between 2009 and 2011 writing nothing else but ETL code, I figured Matt had an article out there.
http://www.mattmasson.com/2011/12/design-pattern-avoiding-transactions/
Here is a high level summary of the approaches and the error you found.
[ERROR]
The error you found is related to MSDTC having issues. It must be configured and working correctly; common culprits are firewalls. Check out this post.
http://social.msdn.microsoft.com/Forums/sqlserver/en-US/3a5c847e-9c7e-4628-b857-4e6edaa7936c/sql-task-transaction-required?forum=sqlintegrationservices
[SOLUTION 1] - Use transactions at the package, task or container level.
Some data providers do not support MSDTC, and some tasks do not support transactions. This approach may also hurt performance, since you are adding a new layer to support two-phase commits.
http://technet.microsoft.com/en-us/library/aa213066(v=sql.80).aspx
[SOLUTION 2] - Use the following tasks.
A - BEGIN TRAN (EXECUTE SQL)
B - YOUR DATA FLOW
C - TEST THE RETURN CODE
1 - GOOD = COMMIT (EXECUTE SQL)
2 - FAILURE = ROLLBACK (EXECUTE SQL)
You must have the RetainSameConnection property set to True on the connection.
This forces all calls through one session (SPID), so all transaction management happens on the server.
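A sketch of the Execute SQL statements involved (the data flow itself is an SSIS component and is shown here only as a comment):

```sql
-- Task A (Execute SQL Task): open the transaction
BEGIN TRANSACTION;

-- Task B: the Data Flow runs here on the same connection
-- (this is why RetainSameConnection = True is required)

-- Task C, success path (Execute SQL Task):
COMMIT TRANSACTION;

-- Task C, failure path (Execute SQL Task):
ROLLBACK TRANSACTION;
```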
[SOLUTION 3] - Write all your code so that it is restartable. This does not mean you have to go out and use checkpoints.
One solution is to always use UPSERTS. Insert new data. Update old data. Deletes are only a flag in a table. This pattern allows a failed job to be executed many times with the same final state being achieved.
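As an illustration of the UPSERT pattern, here is a hedged T-SQL sketch (the table and column names are made up for the example):

```sql
-- Re-runnable load: update existing rows, insert new ones,
-- and treat deletes as a flag rather than removing rows
MERGE dbo.Customer AS tgt
USING staging.Customer AS src
    ON tgt.CustomerId = src.CustomerId
WHEN MATCHED THEN
    UPDATE SET tgt.Name = src.Name, tgt.IsDeleted = src.IsDeleted
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerId, Name, IsDeleted)
    VALUES (src.CustomerId, src.Name, src.IsDeleted);
```

Running this twice with the same staging data leaves the target in the same final state, which is exactly the restartability the pattern is after.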
Another solution is to handle all error rows by placing them into a hospital table for manual inspection, correction, and insertion.
Why not use a database snapshot (keeps track of just changed records)? Take a snapshot before the ETL job. If an error occurs, restore the database from the snapshot. Last step is to remove the snapshot from the system to clean up house.
In short, I hope this is enough ideas to help you out.
While the transaction option is nice, it does have some downfalls. If you need an example, just ping me.
Sincerely
J
What package protection level are you using? DontSaveSensitive? EncryptSensitiveWithUserKey? I'd recommend changing it to EncryptSensitiveWithPassword and entering a password. That way the sensitive values, such as connection passwords, won't disappear.
Have you tried testing the connection to the database in the connection manager?
We are running a website on a VPS with SQL Server 2008 R2 x64. We are being bombarded with 17886 errors, namely:
The server will drop the connection, because the client driver has
sent multiple requests while the session is in single-user mode. This
error occurs when a client sends a request to reset the connection
while there are batches still running in the session, or when the
client sends a request while the session is resetting a connection.
Please contact the client driver vendor.
This causes SQL statements to return corrupt results. I have tried pretty much all of the suggestions I have found on the net, including:
with MARS, and without
with pooling, and without
with async=true, and without
We only have one database, and it is absolutely multi-user.
Everything has been installed recently, so it is up to date. The errors may be correlated with high CPU (though not exclusively, according to the monitors I have seen), and also with high request rates from search engines. However, high CPU or request rates shouldn't cause SQL connections to reset; at worst we should see high response times or IIS refusing to send a response.
Any suggestions? I am only a developer, not a DBA. Do I need a DBA to solve this problem?
Not sure, but some of your queries might be causing blocking or deadlocks on the server.
The next time you detect this error:
Open Management Studio (on the server, install it if necessary)
Open a new query window
Run sp_who2
Check the BlkBy column, which is short for Blocked By. If there is any data in that column, you have a blocking problem (normally that column is completely empty).
If you do have blocking, then we can continue with the next steps. But first, please check that.
To fix the error above, "MultipleActiveResultSets=True" needs to be added to the connection string.
via Event ID 17886 MSSQLServer – The server will drop the connection
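For reference, a MARS-enabled connection string might look like this (the server and database names are placeholders):

```
Server=myServer;Database=myDb;Integrated Security=True;MultipleActiveResultSets=True;
```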
I would create an event log task to email you whenever 17886 is thrown. Then go immediately to the DB, execute sp_who2, get the BlkBy SPID, and run DBCC INPUTBUFFER on it. Hopefully the EventInfo column will give you something a bit more tangible to go on.
sp_who2
DBCC INPUTBUFFER(62)
GO
Use an "Instance Per Request" strategy in your DI registration code and your problem will be solved.
Most probably you are using dependency injection. During web development you have to take the possibility of concurrent requests into account, so you have to make sure every request gets new instances from the DI container; otherwise you will run into concurrency issues. Don't be cheap by using ".SingleInstance" for services and contexts.
Enabling MARS will probably decrease the number of errors, but the errors that remain will be less clear. Enabling MARS is almost never the solution; do not use it unless you know what you're doing.
I have a SQL Server [2012 Express with Advanced Services] database, with not much in it. I'm developing an application using EF Code First, and since my model is still in a state of flux, the database is getting dropped and re-created several times per day.
This morning, my application failed to connect to the database the first time I ran it. On investigation, it seems that the database is in "Recovery Pending" mode.
Looking in the event log, I can see that SQL Server has logged:
Starting up database (my database)
...roughly twice per second all night long. (The event log filled up, so I can't see beyond yesterday evening).
Those "information" log entries stop at about 6am this morning, and are immediately followed by an "error" log entry saying:
There is insufficient memory in resource pool 'internal' to run this query
What the heck happened to my database?
Note: it's just possible that I left my web application running in "debug" mode overnight - although without anyone "driving" it I can't imagine that there would be much database traffic, if any.
It's also worth mentioning that I have a full-text catalog in the database (though as I say, there's hardly any actual content in the DB at present).
I have to say, this is worrying - I would not be happy if this were to happen to my production database!
With AUTO_CLOSE ON, the database is closed as soon as there are no connections to it, and it re-opens (running recovery, albeit a fast-paced one) every time a connection is established. So you were seeing the message because every 2 seconds your application would connect to the database. You probably always had this behavior and never noticed it before; now that your database crashed, you investigated the log and discovered it. While it is good that you now know and will likely fix it, this does not address your real problem, namely the availability of the database.
So now you have a database that won't come out of recovery. What do you do? You restore from your last backup and apply your disaster recovery plan. Really, that's all there is to it, and there is no alternative.
If you want to understand why the crash happened (it can be any of a myriad of reasons...) then you need to contact CSS (Product Support). They have the means to guide you through the investigation.
If you want to turn off this message in the event log:
Go to SQL Server Management Studio,
Right-click on your database,
Select Options (in the left panel),
Look in the "Automatic" section, and change "Auto Close" to "False",
Click OK.
That's all :)
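The same setting can also be changed in T-SQL instead of the SSMS dialog (assuming a database named MyDb):

```sql
ALTER DATABASE [MyDb] SET AUTO_CLOSE OFF;
```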
I had a similar problem with a SQL Express database stuck in recovery. After investigating the log, it transpired that the database was starting up every couple of minutes. Running the script
select name, state_desc, is_auto_close_on from sys.databases where name = 'mydb'
revealed that auto close was set to on.
So it appears that the database is not permanently in recovery: it actually comes online for a brief second, then goes offline again because there are no client connections.
I solved this with the following script:
DECLARE @state varchar(20);
WHILE 1 = 1
BEGIN
    SELECT @state = state_desc FROM sys.databases WHERE name = 'mydb';
    IF @state = 'ONLINE'
    BEGIN
        ALTER DATABASE MyDb SET AUTO_CLOSE OFF;
        PRINT 'Online';
        BREAK;
    END
    WAITFOR DELAY '00:00:02';
END