I have used SQL Service Broker and SqlTableDependency, and started SqlTableDependency on a table to get notifications when the table data changes. I have granted all the database permissions listed in the SqlTableDependency documentation. After some time, possibly while idle, it reports the status "Waiting for notification".
When I change the table (inserting a new record), the status does not change from "Waiting for notification" and I get the error "The conversation handle "A705917C-4762-E711-9447-000C29C3FCF0" is not found."
Can anyone help me fix this issue?
First, please read this comment:
There is one very common scenario that results in much more time:
debugging. When you develop applications, you often spend several
minutes inside the debugger before you move on. So when you debug an
application, please make sure the value assigned to the
watchDogTimeOut parameter is long enough; otherwise the database
objects will be destroyed in the middle of your debugging activity.
Reference
On the other hand, if you are using SqlDependency and get an error like this:
The conversation handle
"206A971D-6F25-DA11-B22F-0003FF6FCCCA" is not found. Invalid object
name 'SqlQueryNotificationService -
41136655-4314-4536-a477-37156eb628db'.
Then try enabling TRUSTWORTHY:
ALTER DATABASE [DbName] SET TRUSTWORTHY ON;
The TRUSTWORTHY database property is used to indicate whether the
instance of SQL Server trusts the database and the contents within it.
By default, this setting is OFF, but can be set to ON by using the
ALTER DATABASE statement. more information
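To confirm whether the flag took effect, you can check sys.databases afterwards (a minimal sketch, reusing the placeholder database name DbName from the statement above):
SELECT name, is_trustworthy_on
FROM sys.databases
WHERE name = N'DbName';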
Thanks to Scott Hanselman for his answer.
I have an on-premises CRM 2016 instance and I can't receive any incoming emails inside it, even though when I run the test access it says everything is good.
First, I'm unable to change a queue record's email address because I keep getting a SQL timeout error (no matter how much I increase the timeout, it never changes), but if I try to change any other field it works and saves (just not the email field, of course).
The same happens with the Mailbox records: when I try to change the email it returns a SQL timeout error.
So what I did was change these email addresses via SQL queries, but after that the emails still won't get created inside CRM.
It shows the following warning in the Event Viewer:
35241 - The recipients for the email message with subject "[x]" in mailbox [email address] did not match any known records.
I'm running out of options here. When I run the diagnostics tool on my organization, its performance is good, but there must be something obstructing the communication with SQL. Any clues?
SQL timeout error:
Unhandled Exception: System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=8.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]: SQL timeout expired. Detail:
-2147204783
SQL timeout expired.
2018-10-10T14:14:15.5749939Z
I got the answer from the Microsoft Community Forums, thanks to Radu Chiribelea:
It's not enough to change the email address in SQL in a record's base table for it to be used for email tracking. There are other references as well, for example EmailSearchBase. This is why you need to let the platform handle your changes.
Your biggest issue here is the SQL timeout, and that is what you need to address. Since this occurs on a Create / Update, I suspect there might be a deadlock somewhere. Do you have any plug-ins or workflows triggered at the time you create / update? If you disable those, do you still see the issue?
Can you enable a CRM platform trace at Verbose level while reproducing the issue? This would give you a better overview of the actual timeout, and you can then start from there to tackle it.
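While reproducing the issue, it can also help to look for blocking on the SQL Server side. This is only a minimal sketch using the standard dynamic management views (nothing CRM-specific is assumed):
-- Sessions that are currently blocked, and what they are running
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
A row here while the Create / Update hangs would point to the session holding the locks.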
I get the following warning
WARNING: there is already a transaction in progress
in my database, and I want to investigate why this happens. However, because the database is accessed through many microservices, I cannot find which service is trying to start a new/parallel connection.
How can I increase the level of information in this message? For example a timestamp, who tried to start the connection (such as the client_addr field), or any other information that would reveal the root of the fault.
Thanks in advance
The source is starting a transaction twice, for example:
t=# begin;
BEGIN
Time: 22.594 ms
t=# begin;
WARNING: there is already a transaction in progress
BEGIN
Time: 1.269 ms
To see who and when, set log_min_messages to at least warning, and set log_line_prefix to include %h for the client IP, %m for the timestamp, and %u for the username (see https://www.postgresql.org/docs/current/static/runtime-config-logging.html#RUNTIME-CONFIG-LOGGING-WHAT); with logging_collector on, of course. Then check the logs.
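A minimal sketch of applying those settings with ALTER SYSTEM (assuming superuser access; the exact prefix format is just an example):
ALTER SYSTEM SET log_min_messages = 'warning';
ALTER SYSTEM SET log_line_prefix = '%m %u@%d %h ';  -- timestamp, user@database, client host
ALTER SYSTEM SET logging_collector = on;            -- takes effect only after a server restart
SELECT pg_reload_conf();                            -- reloads the settings that do not need a restart
After that, each "there is already a transaction in progress" warning in the log will carry the time, user, and client address of the offending connection.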
There's plenty you can do to find out what's going on. First, you can check the PostgreSQL logs. If you do not have access to the logs, you can check which queries are active, idle, or idle in transaction by running the following query:
SELECT
pid,
query,
state
FROM pg_stat_activity
There you can see which transactions are currently running by adding WHERE state = 'active' to the query, as in the sketch below.
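For example, also pulling in the client address and transaction start time the question asked about (both are standard pg_stat_activity columns):
SELECT
  pid,
  usename,
  client_addr,
  xact_start,
  state,
  query
FROM pg_stat_activity
WHERE state = 'active';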
IMPORTANT NOTE:
If you're using services to access the database (especially C# services, speaking from experience), you have to check your connection to the database, because if it is not correctly configured you'll end up with services that can accept only one user per transaction, and that's really dangerous.
The problem might be that you are sending your calls to the database through one connection and the 'service' never opens new connections. Therefore, PostgreSQL will reject any incoming queries and emit the message:
WARNING: there is already a transaction in progress
because the connection channel is already being used by a transaction.
I was planning to use SSIS logging to get task-level details (run duration, any error message thrown, and the user who triggered the job) for my package.
SSIS was creating the dbo.sysssislog table under System Tables and it was working just fine. Suddenly it stopped creating the table under System Tables and started creating it under User Tables. Also, it is now not logging some events that were logged previously when the table was created under System Tables, such as PackageStart and the User:PackageStart/User:PackageEnd events for some tasks.
Can anyone please guide me as to what's going wrong here?
Whether the table shows under System Tables versus User Tables is fairly meaningless, but if you want it to show the same as before, mark it as a Microsoft-shipped table:
EXECUTE sys.sp_MS_marksystemobject 'sysssislog'
The way database logging works in the package deployment model is that SSIS will attempt to log to dbo.sysdtslog90/dbo.sysssislog (depending on your version), but if that table doesn't exist, it will create it for you. There is a copy of that table in the msdb catalog which is marked as a system object. When SSIS creates its own copy, it just has the DDL somewhere in the bowels of the code that does logging. You'll notice it also creates a stored procedure, sp_ssis_addlogentry, to assist with the logging.
As for your observation of inconsistent logging behaviour, all I can say is I've never seen that. The only reason it won't log an event is if the event doesn't occur: either a precursor condition didn't happen or the package errored out. If you can provide a reproducible scenario where it does and then doesn't log events, I'll be happy to tell you why it does/doesn't do it.
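To see which events are actually being captured, a quick query against the log table can help (a sketch assuming the default dbo.sysssislog table name):
SELECT starttime, endtime, event, source, message
FROM dbo.sysssislog
WHERE event IN ('PackageStart', 'PackageEnd', 'OnError')
ORDER BY starttime;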
I am trying to set up merge replication using web synchronization between a publishing SQL Server 2012 Standard and a subscribing SQL Server 2012 Express. After following the instructions provided on TechNet, I am stuck on this:
Source: Merge Process(Web Sync Server)
Number: -2147200985
Message: The subscription to publication 'MyMergePublication' has expired or does not exist.
I already verified that the SSL certificates are good and that I can browse to the publishing machine's URL https://mycomputer/replisapi.dll and get the expected output. I already verified that the snapshot was set up, and I took a giant hammer and used an administrator account to run the pool identity, which is really bad security-wise, but I wanted to validate that it was not security that was tripping me up.
To further the mystery, when I try and fail to sync, the publisher acknowledges that a new subscriber has been registered, but it cannot get the snapshot at all, and thus the subscriber database is still empty.
In the replication monitor there is no failed synchronization history, nor any errors; all it has to say is that the subscriber is uninitialized, and no more.
Turning up the verbosity of the merge agent, I saw some SQL being executed; I tried replicating that SQL and found it was failing with the same error:
{call sys.sp_MSgetreplicainfo(?,?,?,?,?,?,?,90)}
I called it with only the 3 mandatory parameters supplied and it would fail. That is despite the prior call to sp_helpmergepublication returning a row for that publication. Oddly, the output of sp_helpmergepublication does not match what I configured for the subscription (e.g. it says the web URL is null, even though viewing the properties correctly shows the web URL being set). Not sure whether that is significant.
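For reference, the sp_helpmergepublication check was simply something along these lines (using the publication name from the error above):
EXEC sp_helpmergepublication @publication = N'MyMergePublication';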
The body of sp_MSgetreplicainfo contains a call to another system sproc that I cannot run for some reason (it says not found), so I'm not sure what is actually going on here.
Any clues would be greatly appreciated.
I have a working Service Broker set up on a server. We're in the process of moving to a new server, but I can't seem to get Service Broker set up on the new box.
I have done the obvious (to me) things, like enabling the broker on the DB; dropping the route, services, contract, queues, and even message types and re-adding them; and setting ALTER QUEUE with STATUS ON.
SELECT * FROM sys.service_queues
gives me a list of the queues, including my own two, which show as activation_enabled, receive_enabled, etc.
Needless to say, the queues aren't working. When I drop messages into them, nothing goes in and nothing comes out.
Any ideas? I'm sure there's something really obvious I've missed...
Just a shot in the dark:
ALTER AUTHORIZATION ON DATABASE::[restored db name] TO [sa];
The dbo of the restored database is the Windows SID that created the db on the original server. This may be a local SID (e.g. SERVERNAME\user) that has no meaning on the new server. This problem usually affects activated procedures and may affect message delivery, both issues happening due to SQL's inability to impersonate 'dbo'. Changing dbo to a valid login SID (like sa) would fix it.
If this doesn't fix it, then you need to track down where the messages go. If they stay in sys.transmission_queue, then you must check the transmission_status. If they reach the target queue but no activation occurs, check the ERRORLOG. If they vanish, it means you are doing fire-and-forget (SEND followed immediately by END CONVERSATION) and you are therefore deleting the error message that indicates the cause. The article Troubleshooting Dialogs contains more tips on where to look.
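A minimal sketch of the transmission_queue check mentioned above:
SELECT conversation_handle,
       to_service_name,
       enqueue_time,
       transmission_status
FROM sys.transmission_queue;
A non-empty transmission_status is usually the error text explaining why the message could not be delivered.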
And last, but not least, try using ssbdiagnose.exe.
In addition to Remus's answer, you might also want to check the BrokerEnabled property of the restored DB. Whenever you restore a DB, the BrokerEnabled property of the restored DB is set to False. For this reason nothing will go into your queue. To address this:
Right-click the restored DB in SSMS > go to "Properties" > "Options" >
scroll down to the "Service Broker" group and verify the value of the "Broker
Enabled" property. If it is set to False, change it to True and this
should solve your problem.
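The T-SQL equivalent, if you prefer a script (a sketch reusing the [restored db name] placeholder from above; ENABLE_BROKER keeps the existing broker identifier, whereas NEW_BROKER would assign a new one):
ALTER DATABASE [restored db name] SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;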