Environment: SQL Server 2012 Express edition.
Goal: Set up SQL Server Service Broker with external activation, calling a command-line app.
What I have done so far: created message types, contracts, initiator and target queues, and the corresponding services. I also set up a notification queue and a notification service, and created an event notification for the TargetQ. The event notification is configured so that when the queue-activation event is raised, it should (I think) notify the NotifySvc that there is work to be processed in TargetQ. Please feel free to correct me at any point; this is very new to me.
What happens: I have a trigger on a table that (on insert) creates a message and sends it to the TargetSvc. The message arrives in the TargetQ happily, and this is where everything stops. I am not sure whether the event notification for queue activation is ever fired, but the message never makes it to the NotifyQ, so my External Activator (EA) app is never called.
I realize I am skipping a lot of detail around setup and configuration but, as the topic is new to me, I am hoping that you guys see something obvious. Any help is much appreciated.
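That said, the core of my setup looks roughly like this (a simplified sketch; the queue and service names match the ones above, the contract is the standard event-notification contract, and the rest of the script is omitted):

    -- Notification queue and service that the External Activator listens on
    CREATE QUEUE NotifyQ;
    CREATE SERVICE NotifySvc
        ON QUEUE NotifyQ
        ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

    -- When activation fires on TargetQ, post a QUEUE_ACTIVATION event to NotifySvc
    CREATE EVENT NOTIFICATION TargetQ_Activation
        ON QUEUE TargetQ
        FOR QUEUE_ACTIVATION
        TO SERVICE 'NotifySvc', 'current database';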
A message is dropped onto a Service Bus queue with ScheduledEnqueueTimeUTC set, and the Service Bus connector in a Logic App has its trigger set to pick messages from the queue at 12:05 AM EST every day.
Problem: the Logic App has picked up the same message twice, once with the Service Bus message property State='Scheduled' and once with State='Active', both with the same sequenceNumber. When does this happen and how can it be solved?
Here is one workaround we found that should meet your needs: to pick up and send a message only once, enable the Split On setting on the trigger, as shown below.
NOTE: I tried this with a Logic App (Standard), as this option is not available in the Consumption plan.
Please refer to this MS DOC & SO THREAD for more information.
In my scenario, I want some services to be fixed (as in, not needing to be updated) and to add other services as time goes by. (I'm using one DB instance, but that shouldn't matter with Service Broker.)
I want to set up the fixed ones so that they can send a message back to the initiator of any message in their queue, without me changing their logic and procedures every time I add another service.
Is this even possible, or do I have to add more logic as new services are created?
If I'm understanding your question correctly, this is how Service Broker works by default. A conversation is between two parties (an initiator and a target); once that conversation is established, either party can send messages on it and they will go to the other party. So, if you want to send a message back to the initiator, just send it on the same conversation handle the original message was received on and you should be good to go.
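A minimal sketch of that reply pattern inside the target's activated procedure (the queue and message type names here are placeholders, not your actual objects):

    DECLARE @handle UNIQUEIDENTIFIER, @body VARBINARY(MAX), @type SYSNAME;

    -- Pull the next message from the target's queue
    WAITFOR (
        RECEIVE TOP (1)
            @handle = conversation_handle,
            @body   = message_body,
            @type   = message_type_name
        FROM TargetQueue
    ), TIMEOUT 5000;

    IF @type = N'RequestMessage'            -- placeholder message type
    BEGIN
        -- The reply goes back to whichever initiator started this conversation
        SEND ON CONVERSATION @handle
            MESSAGE TYPE [ReplyMessage]     -- placeholder message type
            (CAST(N'done' AS VARBINARY(MAX)));
    END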
A single process in my ASP.NET website will be fired from different locations at roughly the same time.
I am trying to use SQL Server Service Broker to queue the requests and execute them one by one.
I enabled Service Broker and created a queue and a service in the database. On receiving a request, I begin a dialog conversation using only that one service and log the conversation token in a table.
I have written an activated procedure that reads the data passed into the queue and initiates processing.
The conversation is ended inside the activated procedure after processing is complete.
I have doubts about the pattern I am following: the conversation does not get closed correctly. In some examples I have seen an initiator and target queue pattern where the conversation is ended at both endpoints. Please help me figure out the pattern needed in this case.
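For reference, the pattern I am currently following looks roughly like this (a sketch; ProcessingService, ProcessingQueue, the contract, and the message type names are all placeholders):

    -- On receiving a request: open a dialog on the single service and send the payload
    DECLARE @handle UNIQUEIDENTIFIER,
            @payload VARBINARY(MAX) = CAST(N'<request/>' AS VARBINARY(MAX));

    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE [ProcessingService]
        TO SERVICE 'ProcessingService'
        ON CONTRACT [ProcessingContract]     -- placeholder contract
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @handle
        MESSAGE TYPE [ProcessingRequest]     -- placeholder message type
        (@payload);
    -- @handle is the conversation token I log in the table

    -- The activated procedure on ProcessingQueue then RECEIVEs the message,
    -- does the work, and calls END CONVERSATION @handle when it is done.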
UPDATE
Sorry for not updating; I got busy with some other work. I changed to using two queues (initiator and target), two services, and the corresponding activated procedures.
The conversations get closed properly now. When a message lands in the target queue while the 1st request is still being processed, do we need to specify any setting or command to ensure that the 1st request completes before the 2nd one starts?
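For context, the activation on my target queue currently looks roughly like this (a sketch; the queue and procedure names are placeholders), in case any of these settings are what controls that:

    ALTER QUEUE TargetQueue
        WITH ACTIVATION (
            STATUS = ON,
            PROCEDURE_NAME = dbo.usp_ProcessTargetQueue,  -- placeholder proc
            -- MAX_QUEUE_READERS caps how many instances of the activated
            -- procedure run concurrently; 1 means requests are processed
            -- strictly one at a time
            MAX_QUEUE_READERS = 1,
            EXECUTE AS OWNER
        );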
I'm using EF 6 with SQL Server 2012.
I'm trying to use SqlDependency to refresh my cached data, in a class library (DLL).
I have the following, based on the guides I found, but it seems like it is not working, and I get no error.
Enabled Service Broker on the database.
Created the broker queue and the service (sketched below).
Using the sa login.
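The broker side of it is roughly this (a sketch; MyDb, MyNotificationQueue, and MyNotificationService are placeholder names, and the contract is the standard query-notification contract):

    -- Enable Service Broker on the database
    ALTER DATABASE MyDb SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;

    -- Queue and service that query notifications are delivered to
    CREATE QUEUE MyNotificationQueue;
    CREATE SERVICE MyNotificationService
        ON QUEUE MyNotificationQueue
        ([http://schemas.microsoft.com/SQL/Notifications/PostQueryNotification]);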
I'm testing this whole thing through unit-test code. I'm not sure whether the notification is simply not instantaneous, but my breakpoint in SqlDependency.OnChange is never hit.
But even if I purposely wait after making the changes, OnChange is still not triggered.
Once I have made the relevant data changes, how can I know whether SQL Server is generating a notification in the database?
Update:
Initially I found a "master key encryption is required" error in the SQL Server log. After creating that key, the error no longer appeared, but OnChange was still not triggered.
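(The key was created with something along these lines; the password here is just a placeholder:)

    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPlaceholderPassword!123';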
After many trials, and based on 1 important article:
http://www.codeproject.com/Articles/12335/Using-SqlDependency-for-data-change-events
1) After the SqlDependency is initialized, you must execute the SqlCommand; simply invoking sqlCmd.ExecuteNonQuery() is enough.
2) After the OnChange event is triggered, you must remove the event handler, create a new SqlCommand and SqlDependency, rebind the event handler, and follow rule #1 again.
I am using Service Broker external activation. I have created an event notification for the queue (QUEUE_ACTIVATION). I am currently running a lot of tests, and sometimes my queue gets deactivated.
After re-enabling the queue, the event notification does not work anymore and the external activator does not start the console app.
I found this, but that seems to be something else, as "select * from sys.event_notifications" shows that the event notification has already been created.
Dropping and re-creating the same event notification makes it work again, but that seems wrong.
How can I automatically detect that the event notification is not working?
You probably don't RECEIVE and commit the notification from the monitoring queue, which leaves the queue monitor in the NOTIFIED state so it never transitions to RECEIVES_OCCURRING. See Understanding Queue Monitors.
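A minimal sketch of what that means in T-SQL (NotifyQueue is a placeholder for the event-notification queue the external activator listens on; normally the activator itself, not an ad-hoc script, does this RECEIVE):

    -- Check the queue monitor state: NOTIFIED means an activation notification
    -- was sent but nothing has RECEIVEd from the queue since
    SELECT q.name, m.state, m.last_empty_rowset_time, m.last_activated_time
    FROM sys.dm_broker_queue_monitors AS m
    JOIN sys.service_queues AS q
        ON m.queue_id = q.object_id
    WHERE m.database_id = DB_ID();

    -- Receiving and committing drains the pending notification and lets the
    -- monitor transition again
    BEGIN TRANSACTION;
        RECEIVE TOP (1) conversation_handle, message_type_name, message_body
        FROM NotifyQueue;
    COMMIT TRANSACTION;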